00:00:00.000  Started by upstream project "autotest-nightly-lts" build number 2461
00:00:00.000  originally caused by:
00:00:00.000   Started by upstream project "nightly-trigger" build number 3722
00:00:00.000   originally caused by:
00:00:00.000    Started by timer
00:00:00.175  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.175  The recommended git tool is: git
00:00:00.176  using credential 00000000-0000-0000-0000-000000000002
00:00:00.177   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.212  Fetching changes from the remote Git repository
00:00:00.214   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.252  Using shallow fetch with depth 1
00:00:00.252  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.252   > git --version # timeout=10
00:00:00.286   > git --version # 'git version 2.39.2'
00:00:00.286  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.310  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.310   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.249   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.261   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.274  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.274   > git config core.sparsecheckout # timeout=10
00:00:05.285   > git read-tree -mu HEAD # timeout=10
00:00:05.299   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.317  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.318   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
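The checkout above is Jenkins' fetch-then-detach pattern rather than a full clone: a depth-1 fetch of refs/heads/master, followed by a forced checkout of the fetched commit. A minimal sketch of the same sequence, assuming an empty scratch directory (URL and ref taken from the trace above):

  # Shallow-fetch one branch and check out FETCH_HEAD detached,
  # mirroring the git invocations in the log.
  git init jbp && cd jbp
  git fetch --tags --force --progress --depth=1 -- \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  git checkout -f "$(git rev-parse 'FETCH_HEAD^{commit}')"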
00:00:05.429  [Pipeline] Start of Pipeline
00:00:05.440  [Pipeline] library
00:00:05.441  Loading library shm_lib@master
00:00:05.441  Library shm_lib@master is cached. Copying from home.
00:00:05.454  [Pipeline] node
00:00:05.476  Running on VM-host-SM16 in /var/jenkins/workspace/ubuntu22-vg-autotest
00:00:05.478  [Pipeline] {
00:00:05.485  [Pipeline] catchError
00:00:05.486  [Pipeline] {
00:00:05.496  [Pipeline] wrap
00:00:05.502  [Pipeline] {
00:00:05.507  [Pipeline] stage
00:00:05.509  [Pipeline] { (Prologue)
00:00:05.520  [Pipeline] echo
00:00:05.521  Node: VM-host-SM16
00:00:05.525  [Pipeline] cleanWs
00:00:05.534  [WS-CLEANUP] Deleting project workspace...
00:00:05.534  [WS-CLEANUP] Deferred wipeout is used...
00:00:05.540  [WS-CLEANUP] done
00:00:05.755  [Pipeline] setCustomBuildProperty
00:00:05.861  [Pipeline] httpRequest
00:00:06.214  [Pipeline] echo
00:00:06.216  Sorcerer 10.211.164.20 is alive
00:00:06.226  [Pipeline] retry
00:00:06.228  [Pipeline] {
00:00:06.243  [Pipeline] httpRequest
00:00:06.247  HttpMethod: GET
00:00:06.248  URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.248  Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.250  Response Code: HTTP/1.1 200 OK
00:00:06.250  Success: Status code 200 is in the accepted range: 200,404
00:00:06.251  Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.685  [Pipeline] }
00:00:06.703  [Pipeline] // retry
00:00:06.709  [Pipeline] sh
00:00:06.994  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.008  [Pipeline] httpRequest
00:00:07.312  [Pipeline] echo
00:00:07.313  Sorcerer 10.211.164.20 is alive
00:00:07.323  [Pipeline] retry
00:00:07.324  [Pipeline] {
00:00:07.335  [Pipeline] httpRequest
00:00:07.338  HttpMethod: GET
00:00:07.339  URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:07.339  Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:07.341  Response Code: HTTP/1.1 200 OK
00:00:07.341  Success: Status code 200 is in the accepted range: 200,404
00:00:07.342  Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:27.675  [Pipeline] }
00:00:27.694  [Pipeline] // retry
00:00:27.702  [Pipeline] sh
00:00:27.985  + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:30.559  [Pipeline] sh
00:00:30.903  + git -C spdk log --oneline -n5
00:00:30.903  c13c99a5e test: Various fixes for Fedora40
00:00:30.903  726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:00:30.903  61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:00:30.903  7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:00:30.903  ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:00:30.920  [Pipeline] writeFile
00:00:30.933  [Pipeline] sh
00:00:31.215  + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:31.227  [Pipeline] sh
00:00:31.509  + cat autorun-spdk.conf
00:00:31.509  SPDK_TEST_UNITTEST=1
00:00:31.509  SPDK_RUN_FUNCTIONAL_TEST=1
00:00:31.509  SPDK_TEST_NVME=1
00:00:31.509  SPDK_TEST_BLOCKDEV=1
00:00:31.509  SPDK_RUN_ASAN=1
00:00:31.509  SPDK_RUN_UBSAN=1
00:00:31.509  SPDK_TEST_RAID5=1
00:00:31.509  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:31.516  RUN_NIGHTLY=1
00:00:31.518  [Pipeline] }
00:00:31.532  [Pipeline] // stage
00:00:31.546  [Pipeline] stage
00:00:31.548  [Pipeline] { (Run VM)
00:00:31.560  [Pipeline] sh
00:00:31.843  + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:31.843  + echo 'Start stage prepare_nvme.sh'
00:00:31.843  Start stage prepare_nvme.sh
00:00:31.843  + [[ -n 5 ]]
00:00:31.843  + disk_prefix=ex5
00:00:31.843  + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]]
00:00:31.843  + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]]
00:00:31.843  + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf
00:00:31.843  ++ SPDK_TEST_UNITTEST=1
00:00:31.843  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:31.843  ++ SPDK_TEST_NVME=1
00:00:31.843  ++ SPDK_TEST_BLOCKDEV=1
00:00:31.843  ++ SPDK_RUN_ASAN=1
00:00:31.843  ++ SPDK_RUN_UBSAN=1
00:00:31.843  ++ SPDK_TEST_RAID5=1
00:00:31.843  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:31.843  ++ RUN_NIGHTLY=1
00:00:31.843  + cd /var/jenkins/workspace/ubuntu22-vg-autotest
00:00:31.843  + nvme_files=()
00:00:31.843  + declare -A nvme_files
00:00:31.843  + backend_dir=/var/lib/libvirt/images/backends
00:00:31.843  + nvme_files['nvme.img']=5G
00:00:31.843  + nvme_files['nvme-cmb.img']=5G
00:00:31.843  + nvme_files['nvme-multi0.img']=4G
00:00:31.843  + nvme_files['nvme-multi1.img']=4G
00:00:31.843  + nvme_files['nvme-multi2.img']=4G
00:00:31.843  + nvme_files['nvme-openstack.img']=8G
00:00:31.843  + nvme_files['nvme-zns.img']=5G
00:00:31.844  + ((  SPDK_TEST_NVME_PMR == 1  ))
00:00:31.844  + ((  SPDK_TEST_FTL == 1  ))
00:00:31.844  + ((  SPDK_TEST_NVME_FDP == 1  ))
00:00:31.844  + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:31.844  + for nvme in "${!nvme_files[@]}"
00:00:31.844  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:00:31.844  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:31.844  + for nvme in "${!nvme_files[@]}"
00:00:31.844  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:00:32.103  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:32.103  + for nvme in "${!nvme_files[@]}"
00:00:32.103  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:00:32.103  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:32.103  + for nvme in "${!nvme_files[@]}"
00:00:32.103  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:00:32.103  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:32.103  + for nvme in "${!nvme_files[@]}"
00:00:32.103  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:00:32.103  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:32.103  + for nvme in "${!nvme_files[@]}"
00:00:32.103  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:00:32.362  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:32.362  + for nvme in "${!nvme_files[@]}"
00:00:32.362  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:00:32.621  Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:32.621  ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:00:32.621  + echo 'End stage prepare_nvme.sh'
00:00:32.621  End stage prepare_nvme.sh
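prepare_nvme.sh drives image creation from a bash associative array mapping backing-file names to sizes; optional images (CMB, FTL, FDP) are appended only when the matching SPDK_TEST_* flag is set, and each entry becomes a raw, falloc-preallocated file. A condensed sketch of that loop, using the paths and the ex5 disk prefix from the trace above:

  declare -A nvme_files=(
      [nvme.img]=5G        [nvme-cmb.img]=5G    [nvme-zns.img]=5G
      [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
      [nvme-openstack.img]=8G
  )
  backend_dir=/var/lib/libvirt/images/backends
  for nvme in "${!nvme_files[@]}"; do
      # One raw backing file per emulated NVMe controller.
      sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
          -n "$backend_dir/ex5-$nvme" -s "${nvme_files[$nvme]}"
  done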
00:00:32.634  [Pipeline] sh
00:00:32.917  + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:32.917  Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -H -a -v -f ubuntu2204
00:00:32.917  
00:00:32.917  DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant
00:00:32.917  SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk
00:00:32.917  VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest
00:00:32.917  HELP=0
00:00:32.917  DRY_RUN=0
00:00:32.917  NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,
00:00:32.917  NVME_DISKS_TYPE=nvme,
00:00:32.917  NVME_AUTO_CREATE=0
00:00:32.917  NVME_DISKS_NAMESPACES=,
00:00:32.917  NVME_CMB=,
00:00:32.917  NVME_PMR=,
00:00:32.917  NVME_ZNS=,
00:00:32.917  NVME_MS=,
00:00:32.917  NVME_FDP=,
00:00:32.917  SPDK_VAGRANT_DISTRO=ubuntu2204
00:00:32.917  SPDK_VAGRANT_VMCPU=10
00:00:32.917  SPDK_VAGRANT_VMRAM=12288
00:00:32.917  SPDK_VAGRANT_PROVIDER=libvirt
00:00:32.917  SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:32.917  SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:32.917  SPDK_OPENSTACK_NETWORK=0
00:00:32.917  VAGRANT_PACKAGE_BOX=0
00:00:32.917  VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:32.917  FORCE_DISTRO=true
00:00:32.917  VAGRANT_BOX_VERSION=
00:00:32.917  EXTRA_VAGRANTFILES=
00:00:32.917  NIC_MODEL=e1000
00:00:32.917  
00:00:32.917  mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt'
00:00:32.917  /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest
00:00:35.452  Bringing machine 'default' up with 'libvirt' provider...
00:00:36.020  ==> default: Creating image (snapshot of base box volume).
00:00:36.020  ==> default: Creating domain with the following settings...
00:00:36.020  ==> default:  -- Name:              ubuntu2204-22.04-1711172311-2200_default_1734132846_c773a586a50df943b99b
00:00:36.020  ==> default:  -- Domain type:       kvm
00:00:36.020  ==> default:  -- Cpus:              10
00:00:36.020  ==> default:  -- Feature:           acpi
00:00:36.020  ==> default:  -- Feature:           apic
00:00:36.020  ==> default:  -- Feature:           pae
00:00:36.020  ==> default:  -- Memory:            12288M
00:00:36.020  ==> default:  -- Memory Backing:    hugepages: 
00:00:36.020  ==> default:  -- Management MAC:    
00:00:36.020  ==> default:  -- Loader:            
00:00:36.020  ==> default:  -- Nvram:             
00:00:36.020  ==> default:  -- Base box:          spdk/ubuntu2204
00:00:36.020  ==> default:  -- Storage pool:      default
00:00:36.020  ==> default:  -- Image:             /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1734132846_c773a586a50df943b99b.img (20G)
00:00:36.020  ==> default:  -- Volume Cache:      default
00:00:36.020  ==> default:  -- Kernel:            
00:00:36.020  ==> default:  -- Initrd:            
00:00:36.020  ==> default:  -- Graphics Type:     vnc
00:00:36.020  ==> default:  -- Graphics Port:     -1
00:00:36.020  ==> default:  -- Graphics IP:       127.0.0.1
00:00:36.020  ==> default:  -- Graphics Password: Not defined
00:00:36.020  ==> default:  -- Video Type:        cirrus
00:00:36.020  ==> default:  -- Video VRAM:        9216
00:00:36.020  ==> default:  -- Sound Type:	
00:00:36.020  ==> default:  -- Keymap:            en-us
00:00:36.020  ==> default:  -- TPM Path:          
00:00:36.020  ==> default:  -- INPUT:             type=mouse, bus=ps2
00:00:36.020  ==> default:  -- Command line args: 
00:00:36.020  ==> default:     -> value=-device, 
00:00:36.020  ==> default:     -> value=nvme,id=nvme-0,serial=12340, 
00:00:36.020  ==> default:     -> value=-drive, 
00:00:36.020  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 
00:00:36.020  ==> default:     -> value=-device, 
00:00:36.020  ==> default:     -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:00:36.020  ==> default: Creating shared folders metadata...
00:00:36.020  ==> default: Starting domain.
00:00:37.928  ==> default: Waiting for domain to get an IP address...
00:00:47.910  ==> default: Waiting for SSH to become available...
00:00:48.846  ==> default: Configuring and enabling network interfaces...
00:00:53.037  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:00:58.309  ==> default: Mounting SSHFS shared folder...
00:00:59.688  ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output
00:00:59.688  ==> default: Checking Mount..
00:01:00.256  ==> default: Folder Successfully Mounted!
00:01:00.256  ==> default: Running provisioner: file...
00:01:00.824      default: ~/.gitconfig => .gitconfig
00:01:01.083  
00:01:01.083    SUCCESS!
00:01:01.083  
00:01:01.083    cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use.
00:01:01.083    Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:01.083    Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm.
00:01:01.083  
00:01:01.092  [Pipeline] }
00:01:01.107  [Pipeline] // stage
00:01:01.117  [Pipeline] dir
00:01:01.117  Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt
00:01:01.119  [Pipeline] {
00:01:01.132  [Pipeline] catchError
00:01:01.134  [Pipeline] {
00:01:01.146  [Pipeline] sh
00:01:01.427  + vagrant ssh-config --host vagrant
00:01:01.427  + sed -ne /^Host/,$p
00:01:01.427  + tee ssh_conf
00:01:04.742  Host vagrant
00:01:04.742    HostName 192.168.121.177
00:01:04.742    User vagrant
00:01:04.742    Port 22
00:01:04.742    UserKnownHostsFile /dev/null
00:01:04.742    StrictHostKeyChecking no
00:01:04.742    PasswordAuthentication no
00:01:04.742    IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204
00:01:04.742    IdentitiesOnly yes
00:01:04.742    LogLevel FATAL
00:01:04.742    ForwardAgent yes
00:01:04.742    ForwardX11 yes
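The three xtrace lines at 00:01:01 above are a single pipeline: vagrant ssh-config emits the generated SSH stanza, sed keeps everything from the first Host line onward, and tee persists it as ssh_conf so every later ssh/scp call can reuse it via -F instead of re-resolving the VM. As a sketch:

  # Capture vagrant's SSH parameters once, reuse them for all later calls.
  vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf
  ssh -t -F ssh_conf vagrant@vagrant 'uname -a'
  scp -F ssh_conf -r jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./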
00:01:04.742  
00:01:04.752  [Pipeline] withEnv
00:01:04.753  [Pipeline] {
00:01:04.763  [Pipeline] sh
00:01:05.041  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:05.041  		source /etc/os-release
00:01:05.041  		[[ -e /image.version ]] && img=$(< /image.version)
00:01:05.041  		# Minimal, systemd-like check.
00:01:05.041  		if [[ -e /.dockerenv ]]; then
00:01:05.041  			# Clear garbage from the node's name:
00:01:05.041  			#  agt-er_autotest_547-896 -> autotest_547-896
00:01:05.041  			#  $HOSTNAME is the actual container id
00:01:05.041  			agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:05.041  			if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:05.041  				# We can assume this is a mount from a host where container is running,
00:01:05.041  				# so fetch its hostname to easily identify the target swarm worker.
00:01:05.041  				container="$(< /etc/hostname) ($agent)"
00:01:05.041  			else
00:01:05.041  				# Fallback
00:01:05.041  				container=$agent
00:01:05.041  			fi
00:01:05.041  		fi
00:01:05.041  		echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:05.041  
00:01:05.312  [Pipeline] }
00:01:05.328  [Pipeline] // withEnv
00:01:05.336  [Pipeline] setCustomBuildProperty
00:01:05.350  [Pipeline] stage
00:01:05.353  [Pipeline] { (Tests)
00:01:05.369  [Pipeline] sh
00:01:05.648  + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:05.920  [Pipeline] sh
00:01:06.199  + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:06.471  [Pipeline] timeout
00:01:06.471  Timeout set to expire in 1 hr 30 min
00:01:06.473  [Pipeline] {
00:01:06.486  [Pipeline] sh
00:01:06.765  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:07.333  HEAD is now at c13c99a5e test: Various fixes for Fedora40
00:01:07.345  [Pipeline] sh
00:01:07.627  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:07.900  [Pipeline] sh
00:01:08.180  + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:08.454  [Pipeline] sh
00:01:08.735  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo
00:01:08.994  ++ readlink -f spdk_repo
00:01:08.994  + DIR_ROOT=/home/vagrant/spdk_repo
00:01:08.994  + [[ -n /home/vagrant/spdk_repo ]]
00:01:08.994  + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:08.994  + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:08.994  + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:08.994  + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:08.994  + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:08.994  + [[ ubuntu22-vg-autotest == pkgdep-* ]]
00:01:08.994  + cd /home/vagrant/spdk_repo
00:01:08.994  + source /etc/os-release
00:01:08.994  ++ PRETTY_NAME='Ubuntu 22.04.4 LTS'
00:01:08.994  ++ NAME=Ubuntu
00:01:08.994  ++ VERSION_ID=22.04
00:01:08.994  ++ VERSION='22.04.4 LTS (Jammy Jellyfish)'
00:01:08.994  ++ VERSION_CODENAME=jammy
00:01:08.994  ++ ID=ubuntu
00:01:08.994  ++ ID_LIKE=debian
00:01:08.994  ++ HOME_URL=https://www.ubuntu.com/
00:01:08.994  ++ SUPPORT_URL=https://help.ubuntu.com/
00:01:08.994  ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
00:01:08.994  ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
00:01:08.994  ++ UBUNTU_CODENAME=jammy
00:01:08.994  + uname -a
00:01:08.994  Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:01:08.994  + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:08.994  Hugepages
00:01:08.994  node     hugesize     free /  total
00:01:08.994  node0   1048576kB        0 /      0
00:01:08.994  node0      2048kB        0 /      0
00:01:08.994  
00:01:08.994  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:01:09.253  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:01:09.253  NVMe                      0000:00:06.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:01:09.253  + rm -f /tmp/spdk-ld-path
00:01:09.253  + source autorun-spdk.conf
00:01:09.253  ++ SPDK_TEST_UNITTEST=1
00:01:09.253  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.253  ++ SPDK_TEST_NVME=1
00:01:09.253  ++ SPDK_TEST_BLOCKDEV=1
00:01:09.253  ++ SPDK_RUN_ASAN=1
00:01:09.253  ++ SPDK_RUN_UBSAN=1
00:01:09.253  ++ SPDK_TEST_RAID5=1
00:01:09.253  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:09.253  ++ RUN_NIGHTLY=1
00:01:09.253  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:01:09.253  + [[ -n '' ]]
00:01:09.253  + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:09.253  + for M in /var/spdk/build-*-manifest.txt
00:01:09.253  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:09.253  + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:09.253  + for M in /var/spdk/build-*-manifest.txt
00:01:09.253  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:09.253  + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:09.253  ++ uname
00:01:09.253  + [[ Linux == \L\i\n\u\x ]]
00:01:09.253  + sudo dmesg -T
00:01:09.253  + sudo dmesg --clear
00:01:09.253  + dmesg_pid=2095
00:01:09.253  + sudo dmesg -Tw
00:01:09.253  + [[ Ubuntu == FreeBSD ]]
00:01:09.253  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:09.253  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:09.253  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:09.253  + [[ -x /usr/src/fio-static/fio ]]
00:01:09.253  + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:09.253  + [[ ! -v VFIO_QEMU_BIN ]]
00:01:09.253  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:09.253  + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:01:09.253  + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:09.253  + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:09.253  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:09.253  + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:09.253  Test configuration:
00:01:09.253  SPDK_TEST_UNITTEST=1
00:01:09.253  SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.253  SPDK_TEST_NVME=1
00:01:09.253  SPDK_TEST_BLOCKDEV=1
00:01:09.253  SPDK_RUN_ASAN=1
00:01:09.253  SPDK_RUN_UBSAN=1
00:01:09.253  SPDK_TEST_RAID5=1
00:01:09.253  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:09.253  RUN_NIGHTLY=1
00:01:09.253   23:34:39	-- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:01:09.253    23:34:39	-- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:09.253     23:34:39	-- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:09.253     23:34:39	-- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:09.254     23:34:39	-- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:09.254      23:34:39	-- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:09.254      23:34:39	-- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:09.254      23:34:39	-- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:09.254      23:34:39	-- paths/export.sh@5 -- $ export PATH
00:01:09.254      23:34:39	-- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:09.254    23:34:39	-- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:09.254      23:34:39	-- common/autobuild_common.sh@440 -- $ date +%s
00:01:09.254     23:34:39	-- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734132879.XXXXXX
00:01:09.254    23:34:39	-- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734132879.OhIHSK
00:01:09.254    23:34:39	-- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:01:09.254    23:34:39	-- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:01:09.254    23:34:39	-- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:09.254    23:34:39	-- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:09.254    23:34:39	-- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:09.254     23:34:39	-- common/autobuild_common.sh@456 -- $ get_config_params
00:01:09.254     23:34:39	-- common/autotest_common.sh@397 -- $ xtrace_disable
00:01:09.254     23:34:39	-- common/autotest_common.sh@10 -- $ set +x
00:01:09.254    23:34:39	-- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f'
00:01:09.254   23:34:39	-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:09.254   23:34:39	-- spdk/autobuild.sh@12 -- $ umask 022
00:01:09.254   23:34:39	-- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:09.254   23:34:39	-- spdk/autobuild.sh@16 -- $ date -u
00:01:09.513  Fri Dec 13 23:34:39 UTC 2024
00:01:09.513   23:34:39	-- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:09.513  LTS-67-gc13c99a5e
00:01:09.513   23:34:39	-- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:09.513   23:34:39	-- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:09.513   23:34:39	-- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:09.513   23:34:39	-- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:09.513   23:34:39	-- common/autotest_common.sh@10 -- $ set +x
00:01:09.513  ************************************
00:01:09.513  START TEST asan
00:01:09.513  ************************************
00:01:09.513  using asan
00:01:09.513   23:34:39	-- common/autotest_common.sh@1114 -- $ echo 'using asan'
00:01:09.513  
00:01:09.513  real	0m0.001s
00:01:09.513  user	0m0.000s
00:01:09.513  sys	0m0.000s
00:01:09.513   23:34:39	-- common/autotest_common.sh@1115 -- $ xtrace_disable
00:01:09.513  ************************************
00:01:09.513  END TEST asan
00:01:09.513  ************************************
00:01:09.513   23:34:39	-- common/autotest_common.sh@10 -- $ set +x
00:01:09.513   23:34:39	-- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:09.513   23:34:39	-- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:09.513   23:34:39	-- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:09.513   23:34:39	-- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:09.513   23:34:39	-- common/autotest_common.sh@10 -- $ set +x
00:01:09.513  ************************************
00:01:09.513  START TEST ubsan
00:01:09.513  ************************************
00:01:09.513  using ubsan
00:01:09.513   23:34:39	-- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:01:09.513  
00:01:09.513  real	0m0.000s
00:01:09.513  user	0m0.000s
00:01:09.513  sys	0m0.000s
00:01:09.513   23:34:39	-- common/autotest_common.sh@1115 -- $ xtrace_disable
00:01:09.513  ************************************
00:01:09.513  END TEST ubsan
00:01:09.513  ************************************
00:01:09.513   23:34:39	-- common/autotest_common.sh@10 -- $ set +x
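The START/END banners and the real/user/sys triplet around each sub-test come from the run_test wrapper in autotest_common.sh, which names a test, times its command, and prints the framing. A simplified, hypothetical re-creation of that shape (the real wrapper also validates arguments and toggles xtrace state):

  run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"    # produces the real/user/sys lines seen above
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
  }
  run_test ubsan echo 'using ubsan'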
00:01:09.513   23:34:39	-- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:09.513   23:34:39	-- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:09.513   23:34:39	-- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:09.513   23:34:39	-- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:09.513   23:34:39	-- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:09.513   23:34:39	-- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:01:09.513   23:34:39	-- spdk/autobuild.sh@58 -- $ unittest_build
00:01:09.513   23:34:39	-- common/autobuild_common.sh@416 -- $ run_test unittest_build _unittest_build
00:01:09.513   23:34:39	-- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
00:01:09.513   23:34:39	-- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:09.513   23:34:39	-- common/autotest_common.sh@10 -- $ set +x
00:01:09.513  ************************************
00:01:09.513  START TEST unittest_build
00:01:09.513  ************************************
00:01:09.513   23:34:39	-- common/autotest_common.sh@1114 -- $ _unittest_build
00:01:09.513   23:34:39	-- common/autobuild_common.sh@407 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared
00:01:09.513  Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:09.513  Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:10.081  Using 'verbs' RDMA provider
00:01:25.224  Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:01:37.427  Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:37.427  Creating mk/config.mk...done.
00:01:37.427  Creating mk/cc.flags.mk...done.
00:01:37.427  Type 'make' to build.
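From here the build is SPDK's standard two-step: the configure invocation shown at autobuild_common.sh@407, then a parallel make. The equivalent shell, with the flags copied from the trace above:

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan \
      --enable-asan --enable-coverage --with-raid5f --without-shared
  make -j10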
00:01:37.427   23:35:06	-- common/autobuild_common.sh@408 -- $ make -j10
00:01:37.427  make[1]: Nothing to be done for 'all'.
00:01:52.307  The Meson build system
00:01:52.307  Version: 1.4.0
00:01:52.307  Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:01:52.307  Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:01:52.307  Build type: native build
00:01:52.307  Program cat found: YES (/usr/bin/cat)
00:01:52.307  Project name: DPDK
00:01:52.307  Project version: 23.11.0
00:01:52.307  C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0")
00:01:52.307  C linker for the host machine: cc ld.bfd 2.38
00:01:52.307  Host machine cpu family: x86_64
00:01:52.307  Host machine cpu: x86_64
00:01:52.307  Message: ## Building in Developer Mode ##
00:01:52.307  Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:52.307  Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:01:52.307  Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:52.307  Program python3 found: YES (/usr/bin/python3)
00:01:52.307  Program cat found: YES (/usr/bin/cat)
00:01:52.307  Compiler for C supports arguments -march=native: YES 
00:01:52.307  Checking for size of "void *" : 8 
00:01:52.307  Checking for size of "void *" : 8 (cached)
00:01:52.307  Library m found: YES
00:01:52.307  Library numa found: YES
00:01:52.307  Has header "numaif.h" : YES 
00:01:52.307  Library fdt found: NO
00:01:52.307  Library execinfo found: NO
00:01:52.307  Has header "execinfo.h" : YES 
00:01:52.307  Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2
00:01:52.307  Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:52.307  Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:52.307  Run-time dependency jansson found: NO (tried pkgconfig)
00:01:52.307  Run-time dependency openssl found: YES 3.0.2
00:01:52.307  Run-time dependency libpcap found: NO (tried pkgconfig)
00:01:52.307  Library pcap found: NO
00:01:52.307  Compiler for C supports arguments -Wcast-qual: YES 
00:01:52.307  Compiler for C supports arguments -Wdeprecated: YES 
00:01:52.307  Compiler for C supports arguments -Wformat: YES 
00:01:52.307  Compiler for C supports arguments -Wformat-nonliteral: YES 
00:01:52.307  Compiler for C supports arguments -Wformat-security: YES 
00:01:52.307  Compiler for C supports arguments -Wmissing-declarations: YES 
00:01:52.307  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:01:52.307  Compiler for C supports arguments -Wnested-externs: YES 
00:01:52.307  Compiler for C supports arguments -Wold-style-definition: YES 
00:01:52.307  Compiler for C supports arguments -Wpointer-arith: YES 
00:01:52.307  Compiler for C supports arguments -Wsign-compare: YES 
00:01:52.307  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:01:52.307  Compiler for C supports arguments -Wundef: YES 
00:01:52.307  Compiler for C supports arguments -Wwrite-strings: YES 
00:01:52.307  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:01:52.307  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:01:52.307  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:01:52.307  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:01:52.307  Program objdump found: YES (/usr/bin/objdump)
00:01:52.307  Compiler for C supports arguments -mavx512f: YES 
00:01:52.307  Checking if "AVX512 checking" compiles: YES 
00:01:52.307  Fetching value of define "__SSE4_2__" : 1 
00:01:52.307  Fetching value of define "__AES__" : 1 
00:01:52.307  Fetching value of define "__AVX__" : 1 
00:01:52.307  Fetching value of define "__AVX2__" : 1 
00:01:52.307  Fetching value of define "__AVX512BW__" : (undefined) 
00:01:52.307  Fetching value of define "__AVX512CD__" : (undefined) 
00:01:52.307  Fetching value of define "__AVX512DQ__" : (undefined) 
00:01:52.307  Fetching value of define "__AVX512F__" : (undefined) 
00:01:52.307  Fetching value of define "__AVX512VL__" : (undefined) 
00:01:52.307  Fetching value of define "__PCLMUL__" : 1 
00:01:52.307  Fetching value of define "__RDRND__" : 1 
00:01:52.307  Fetching value of define "__RDSEED__" : 1 
00:01:52.307  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:01:52.307  Fetching value of define "__znver1__" : (undefined) 
00:01:52.307  Fetching value of define "__znver2__" : (undefined) 
00:01:52.307  Fetching value of define "__znver3__" : (undefined) 
00:01:52.307  Fetching value of define "__znver4__" : (undefined) 
00:01:52.307  Library asan found: YES
00:01:52.307  Compiler for C supports arguments -Wno-format-truncation: YES 
00:01:52.307  Message: lib/log: Defining dependency "log"
00:01:52.307  Message: lib/kvargs: Defining dependency "kvargs"
00:01:52.307  Message: lib/telemetry: Defining dependency "telemetry"
00:01:52.307  Library rt found: YES
00:01:52.307  Checking for function "getentropy" : NO 
00:01:52.307  Message: lib/eal: Defining dependency "eal"
00:01:52.307  Message: lib/ring: Defining dependency "ring"
00:01:52.307  Message: lib/rcu: Defining dependency "rcu"
00:01:52.307  Message: lib/mempool: Defining dependency "mempool"
00:01:52.307  Message: lib/mbuf: Defining dependency "mbuf"
00:01:52.307  Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:52.307  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:52.307  Compiler for C supports arguments -mpclmul: YES 
00:01:52.307  Compiler for C supports arguments -maes: YES 
00:01:52.307  Compiler for C supports arguments -mavx512f: YES (cached)
00:01:52.307  Compiler for C supports arguments -mavx512bw: YES 
00:01:52.307  Compiler for C supports arguments -mavx512dq: YES 
00:01:52.307  Compiler for C supports arguments -mavx512vl: YES 
00:01:52.307  Compiler for C supports arguments -mvpclmulqdq: YES 
00:01:52.307  Compiler for C supports arguments -mavx2: YES 
00:01:52.307  Compiler for C supports arguments -mavx: YES 
00:01:52.307  Message: lib/net: Defining dependency "net"
00:01:52.307  Message: lib/meter: Defining dependency "meter"
00:01:52.307  Message: lib/ethdev: Defining dependency "ethdev"
00:01:52.307  Message: lib/pci: Defining dependency "pci"
00:01:52.307  Message: lib/cmdline: Defining dependency "cmdline"
00:01:52.307  Message: lib/hash: Defining dependency "hash"
00:01:52.307  Message: lib/timer: Defining dependency "timer"
00:01:52.307  Message: lib/compressdev: Defining dependency "compressdev"
00:01:52.307  Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:52.307  Message: lib/dmadev: Defining dependency "dmadev"
00:01:52.307  Compiler for C supports arguments -Wno-cast-qual: YES 
00:01:52.307  Message: lib/power: Defining dependency "power"
00:01:52.307  Message: lib/reorder: Defining dependency "reorder"
00:01:52.307  Message: lib/security: Defining dependency "security"
00:01:52.307  Has header "linux/userfaultfd.h" : YES 
00:01:52.307  Has header "linux/vduse.h" : YES 
00:01:52.307  Message: lib/vhost: Defining dependency "vhost"
00:01:52.307  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:52.307  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:52.307  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:52.307  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:52.307  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:52.307  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:52.307  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:52.307  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:52.307  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:52.307  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:52.307  Program doxygen found: YES (/usr/bin/doxygen)
00:01:52.307  Configuring doxy-api-html.conf using configuration
00:01:52.307  Configuring doxy-api-man.conf using configuration
00:01:52.307  Program mandb found: YES (/usr/bin/mandb)
00:01:52.307  Program sphinx-build found: NO
00:01:52.307  Configuring rte_build_config.h using configuration
00:01:52.307  Message: 
00:01:52.307  =================
00:01:52.307  Applications Enabled
00:01:52.307  =================
00:01:52.307  
00:01:52.307  apps:
00:01:52.307  	
00:01:52.307  
00:01:52.307  Message: 
00:01:52.307  =================
00:01:52.307  Libraries Enabled
00:01:52.307  =================
00:01:52.307  
00:01:52.307  libs:
00:01:52.307  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:01:52.307  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:01:52.307  	cryptodev, dmadev, power, reorder, security, vhost, 
00:01:52.307  
00:01:52.307  Message: 
00:01:52.307  ===============
00:01:52.307  Drivers Enabled
00:01:52.307  ===============
00:01:52.307  
00:01:52.307  common:
00:01:52.307  	
00:01:52.307  bus:
00:01:52.307  	pci, vdev, 
00:01:52.307  mempool:
00:01:52.307  	ring, 
00:01:52.307  dma:
00:01:52.307  	
00:01:52.307  net:
00:01:52.307  	
00:01:52.307  crypto:
00:01:52.307  	
00:01:52.307  compress:
00:01:52.307  	
00:01:52.307  vdpa:
00:01:52.307  	
00:01:52.307  
00:01:52.307  Message: 
00:01:52.307  =================
00:01:52.307  Content Skipped
00:01:52.307  =================
00:01:52.307  
00:01:52.307  apps:
00:01:52.307  	dumpcap:	explicitly disabled via build config
00:01:52.307  	graph:	explicitly disabled via build config
00:01:52.307  	pdump:	explicitly disabled via build config
00:01:52.307  	proc-info:	explicitly disabled via build config
00:01:52.307  	test-acl:	explicitly disabled via build config
00:01:52.307  	test-bbdev:	explicitly disabled via build config
00:01:52.307  	test-cmdline:	explicitly disabled via build config
00:01:52.307  	test-compress-perf:	explicitly disabled via build config
00:01:52.307  	test-crypto-perf:	explicitly disabled via build config
00:01:52.307  	test-dma-perf:	explicitly disabled via build config
00:01:52.307  	test-eventdev:	explicitly disabled via build config
00:01:52.307  	test-fib:	explicitly disabled via build config
00:01:52.307  	test-flow-perf:	explicitly disabled via build config
00:01:52.307  	test-gpudev:	explicitly disabled via build config
00:01:52.307  	test-mldev:	explicitly disabled via build config
00:01:52.307  	test-pipeline:	explicitly disabled via build config
00:01:52.307  	test-pmd:	explicitly disabled via build config
00:01:52.307  	test-regex:	explicitly disabled via build config
00:01:52.307  	test-sad:	explicitly disabled via build config
00:01:52.307  	test-security-perf:	explicitly disabled via build config
00:01:52.307  	
00:01:52.307  libs:
00:01:52.308  	metrics:	explicitly disabled via build config
00:01:52.308  	acl:	explicitly disabled via build config
00:01:52.308  	bbdev:	explicitly disabled via build config
00:01:52.308  	bitratestats:	explicitly disabled via build config
00:01:52.308  	bpf:	explicitly disabled via build config
00:01:52.308  	cfgfile:	explicitly disabled via build config
00:01:52.308  	distributor:	explicitly disabled via build config
00:01:52.308  	efd:	explicitly disabled via build config
00:01:52.308  	eventdev:	explicitly disabled via build config
00:01:52.308  	dispatcher:	explicitly disabled via build config
00:01:52.308  	gpudev:	explicitly disabled via build config
00:01:52.308  	gro:	explicitly disabled via build config
00:01:52.308  	gso:	explicitly disabled via build config
00:01:52.308  	ip_frag:	explicitly disabled via build config
00:01:52.308  	jobstats:	explicitly disabled via build config
00:01:52.308  	latencystats:	explicitly disabled via build config
00:01:52.308  	lpm:	explicitly disabled via build config
00:01:52.308  	member:	explicitly disabled via build config
00:01:52.308  	pcapng:	explicitly disabled via build config
00:01:52.308  	rawdev:	explicitly disabled via build config
00:01:52.308  	regexdev:	explicitly disabled via build config
00:01:52.308  	mldev:	explicitly disabled via build config
00:01:52.308  	rib:	explicitly disabled via build config
00:01:52.308  	sched:	explicitly disabled via build config
00:01:52.308  	stack:	explicitly disabled via build config
00:01:52.308  	ipsec:	explicitly disabled via build config
00:01:52.308  	pdcp:	explicitly disabled via build config
00:01:52.308  	fib:	explicitly disabled via build config
00:01:52.308  	port:	explicitly disabled via build config
00:01:52.308  	pdump:	explicitly disabled via build config
00:01:52.308  	table:	explicitly disabled via build config
00:01:52.308  	pipeline:	explicitly disabled via build config
00:01:52.308  	graph:	explicitly disabled via build config
00:01:52.308  	node:	explicitly disabled via build config
00:01:52.308  	
00:01:52.308  drivers:
00:01:52.308  	common/cpt:	not in enabled drivers build config
00:01:52.308  	common/dpaax:	not in enabled drivers build config
00:01:52.308  	common/iavf:	not in enabled drivers build config
00:01:52.308  	common/idpf:	not in enabled drivers build config
00:01:52.308  	common/mvep:	not in enabled drivers build config
00:01:52.308  	common/octeontx:	not in enabled drivers build config
00:01:52.308  	bus/auxiliary:	not in enabled drivers build config
00:01:52.308  	bus/cdx:	not in enabled drivers build config
00:01:52.308  	bus/dpaa:	not in enabled drivers build config
00:01:52.308  	bus/fslmc:	not in enabled drivers build config
00:01:52.308  	bus/ifpga:	not in enabled drivers build config
00:01:52.308  	bus/platform:	not in enabled drivers build config
00:01:52.308  	bus/vmbus:	not in enabled drivers build config
00:01:52.308  	common/cnxk:	not in enabled drivers build config
00:01:52.308  	common/mlx5:	not in enabled drivers build config
00:01:52.308  	common/nfp:	not in enabled drivers build config
00:01:52.308  	common/qat:	not in enabled drivers build config
00:01:52.308  	common/sfc_efx:	not in enabled drivers build config
00:01:52.308  	mempool/bucket:	not in enabled drivers build config
00:01:52.308  	mempool/cnxk:	not in enabled drivers build config
00:01:52.308  	mempool/dpaa:	not in enabled drivers build config
00:01:52.308  	mempool/dpaa2:	not in enabled drivers build config
00:01:52.308  	mempool/octeontx:	not in enabled drivers build config
00:01:52.308  	mempool/stack:	not in enabled drivers build config
00:01:52.308  	dma/cnxk:	not in enabled drivers build config
00:01:52.308  	dma/dpaa:	not in enabled drivers build config
00:01:52.308  	dma/dpaa2:	not in enabled drivers build config
00:01:52.308  	dma/hisilicon:	not in enabled drivers build config
00:01:52.308  	dma/idxd:	not in enabled drivers build config
00:01:52.308  	dma/ioat:	not in enabled drivers build config
00:01:52.308  	dma/skeleton:	not in enabled drivers build config
00:01:52.308  	net/af_packet:	not in enabled drivers build config
00:01:52.308  	net/af_xdp:	not in enabled drivers build config
00:01:52.308  	net/ark:	not in enabled drivers build config
00:01:52.308  	net/atlantic:	not in enabled drivers build config
00:01:52.308  	net/avp:	not in enabled drivers build config
00:01:52.308  	net/axgbe:	not in enabled drivers build config
00:01:52.308  	net/bnx2x:	not in enabled drivers build config
00:01:52.308  	net/bnxt:	not in enabled drivers build config
00:01:52.308  	net/bonding:	not in enabled drivers build config
00:01:52.308  	net/cnxk:	not in enabled drivers build config
00:01:52.308  	net/cpfl:	not in enabled drivers build config
00:01:52.308  	net/cxgbe:	not in enabled drivers build config
00:01:52.308  	net/dpaa:	not in enabled drivers build config
00:01:52.308  	net/dpaa2:	not in enabled drivers build config
00:01:52.308  	net/e1000:	not in enabled drivers build config
00:01:52.308  	net/ena:	not in enabled drivers build config
00:01:52.308  	net/enetc:	not in enabled drivers build config
00:01:52.308  	net/enetfec:	not in enabled drivers build config
00:01:52.308  	net/enic:	not in enabled drivers build config
00:01:52.308  	net/failsafe:	not in enabled drivers build config
00:01:52.308  	net/fm10k:	not in enabled drivers build config
00:01:52.308  	net/gve:	not in enabled drivers build config
00:01:52.308  	net/hinic:	not in enabled drivers build config
00:01:52.308  	net/hns3:	not in enabled drivers build config
00:01:52.308  	net/i40e:	not in enabled drivers build config
00:01:52.308  	net/iavf:	not in enabled drivers build config
00:01:52.308  	net/ice:	not in enabled drivers build config
00:01:52.308  	net/idpf:	not in enabled drivers build config
00:01:52.308  	net/igc:	not in enabled drivers build config
00:01:52.308  	net/ionic:	not in enabled drivers build config
00:01:52.308  	net/ipn3ke:	not in enabled drivers build config
00:01:52.308  	net/ixgbe:	not in enabled drivers build config
00:01:52.308  	net/mana:	not in enabled drivers build config
00:01:52.308  	net/memif:	not in enabled drivers build config
00:01:52.308  	net/mlx4:	not in enabled drivers build config
00:01:52.308  	net/mlx5:	not in enabled drivers build config
00:01:52.308  	net/mvneta:	not in enabled drivers build config
00:01:52.308  	net/mvpp2:	not in enabled drivers build config
00:01:52.308  	net/netvsc:	not in enabled drivers build config
00:01:52.308  	net/nfb:	not in enabled drivers build config
00:01:52.308  	net/nfp:	not in enabled drivers build config
00:01:52.308  	net/ngbe:	not in enabled drivers build config
00:01:52.308  	net/null:	not in enabled drivers build config
00:01:52.308  	net/octeontx:	not in enabled drivers build config
00:01:52.308  	net/octeon_ep:	not in enabled drivers build config
00:01:52.308  	net/pcap:	not in enabled drivers build config
00:01:52.308  	net/pfe:	not in enabled drivers build config
00:01:52.308  	net/qede:	not in enabled drivers build config
00:01:52.308  	net/ring:	not in enabled drivers build config
00:01:52.308  	net/sfc:	not in enabled drivers build config
00:01:52.308  	net/softnic:	not in enabled drivers build config
00:01:52.308  	net/tap:	not in enabled drivers build config
00:01:52.308  	net/thunderx:	not in enabled drivers build config
00:01:52.308  	net/txgbe:	not in enabled drivers build config
00:01:52.308  	net/vdev_netvsc:	not in enabled drivers build config
00:01:52.308  	net/vhost:	not in enabled drivers build config
00:01:52.308  	net/virtio:	not in enabled drivers build config
00:01:52.308  	net/vmxnet3:	not in enabled drivers build config
00:01:52.308  	raw/*:	missing internal dependency, "rawdev"
00:01:52.308  	crypto/armv8:	not in enabled drivers build config
00:01:52.308  	crypto/bcmfs:	not in enabled drivers build config
00:01:52.308  	crypto/caam_jr:	not in enabled drivers build config
00:01:52.308  	crypto/ccp:	not in enabled drivers build config
00:01:52.308  	crypto/cnxk:	not in enabled drivers build config
00:01:52.308  	crypto/dpaa_sec:	not in enabled drivers build config
00:01:52.308  	crypto/dpaa2_sec:	not in enabled drivers build config
00:01:52.308  	crypto/ipsec_mb:	not in enabled drivers build config
00:01:52.308  	crypto/mlx5:	not in enabled drivers build config
00:01:52.308  	crypto/mvsam:	not in enabled drivers build config
00:01:52.308  	crypto/nitrox:	not in enabled drivers build config
00:01:52.308  	crypto/null:	not in enabled drivers build config
00:01:52.308  	crypto/octeontx:	not in enabled drivers build config
00:01:52.308  	crypto/openssl:	not in enabled drivers build config
00:01:52.308  	crypto/scheduler:	not in enabled drivers build config
00:01:52.308  	crypto/uadk:	not in enabled drivers build config
00:01:52.308  	crypto/virtio:	not in enabled drivers build config
00:01:52.308  	compress/isal:	not in enabled drivers build config
00:01:52.308  	compress/mlx5:	not in enabled drivers build config
00:01:52.308  	compress/octeontx:	not in enabled drivers build config
00:01:52.308  	compress/zlib:	not in enabled drivers build config
00:01:52.308  	regex/*:	missing internal dependency, "regexdev"
00:01:52.308  	ml/*:	missing internal dependency, "mldev"
00:01:52.308  	vdpa/ifc:	not in enabled drivers build config
00:01:52.308  	vdpa/mlx5:	not in enabled drivers build config
00:01:52.308  	vdpa/nfp:	not in enabled drivers build config
00:01:52.308  	vdpa/sfc:	not in enabled drivers build config
00:01:52.308  	event/*:	missing internal dependency, "eventdev"
00:01:52.308  	baseband/*:	missing internal dependency, "bbdev"
00:01:52.308  	gpu/*:	missing internal dependency, "gpudev"
00:01:52.308  	
00:01:52.308  
00:01:52.308  Build targets in project: 85
00:01:52.308  
00:01:52.308  DPDK 23.11.0
00:01:52.308  
00:01:52.308    User defined options
00:01:52.308      buildtype          : debug
00:01:52.308      default_library    : static
00:01:52.308      libdir             : lib
00:01:52.308      prefix             : /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:52.308      b_sanitize         : address
00:01:52.308      c_args             : -fPIC -Werror  -Wno-stringop-overflow -fcommon
00:01:52.308      c_link_args        : 
00:01:52.308      cpu_instruction_set: native
00:01:52.308      disable_apps       : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf
00:01:52.308      disable_libs       : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev
00:01:52.308      enable_docs        : false
00:01:52.308      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring
00:01:52.308      enable_kmods       : false
00:01:52.308      tests              : false
00:01:52.308  
00:01:52.308  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:52.308  ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:01:52.308  [1/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:52.308  [2/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:52.308  [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:52.308  [4/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:52.308  [5/265] Linking static target lib/librte_kvargs.a
00:01:52.308  [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:52.308  [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:52.308  [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:52.308  [9/265] Linking static target lib/librte_log.a
00:01:52.309  [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:52.309  [11/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:52.309  [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:52.309  [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:52.309  [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:52.309  [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:52.309  [16/265] Linking static target lib/librte_telemetry.a
00:01:52.309  [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:52.309  [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:52.309  [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:52.309  [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:52.309  [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:52.309  [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:52.309  [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:52.309  [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:52.309  [25/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.309  [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:52.309  [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:52.568  [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:52.568  [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:52.568  [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:52.568  [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:52.568  [32/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:52.568  [33/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:52.568  [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:52.568  [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:52.827  [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:52.827  [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:52.827  [38/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:52.827  [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:52.827  [40/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:52.827  [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:52.827  [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:52.827  [43/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:52.827  [44/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.086  [45/265] Linking target lib/librte_log.so.24.0
00:01:53.086  [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:53.086  [47/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:53.086  [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:53.086  [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:53.086  [50/265] Linking target lib/librte_kvargs.so.24.0
00:01:53.086  [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:53.086  [52/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.086  [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:53.086  [54/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:53.086  [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:53.086  [56/265] Linking target lib/librte_telemetry.so.24.0
00:01:53.345  [57/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:53.345  [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:53.345  [59/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:53.345  [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:53.345  [61/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:01:53.345  [62/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:01:53.345  [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:53.345  [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:53.604  [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:53.604  [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:53.604  [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:53.604  [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:53.604  [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:53.604  [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:53.604  [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:53.604  [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:53.604  [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:53.604  [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:53.604  [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:53.604  [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:53.604  [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:53.863  [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:53.863  [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:53.863  [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:53.863  [81/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:54.122  [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:54.122  [83/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:54.122  [84/265] Linking static target lib/librte_ring.a
00:01:54.122  [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:54.122  [86/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:54.122  [87/265] Linking static target lib/librte_eal.a
00:01:54.122  [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:54.381  [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:54.381  [90/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:54.381  [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:54.381  [92/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:54.381  [93/265] Linking static target lib/librte_mempool.a
00:01:54.381  [94/265] Linking static target lib/librte_rcu.a
00:01:54.381  [95/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.381  [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:54.381  [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:54.638  [98/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.638  [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:54.638  [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:54.638  [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:54.638  [102/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:54.638  [103/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:54.638  [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:54.896  [105/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:54.896  [106/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.896  [107/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:54.896  [108/265] Linking static target lib/librte_net.a
00:01:54.896  [109/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:54.896  [110/265] Linking static target lib/librte_mbuf.a
00:01:54.896  [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:54.896  [112/265] Linking static target lib/librte_meter.a
00:01:55.154  [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:55.154  [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:55.154  [115/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.154  [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.154  [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:55.154  [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:55.412  [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:55.670  [120/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.670  [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:55.670  [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:55.670  [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:55.670  [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:55.670  [125/265] Linking static target lib/librte_pci.a
00:01:55.670  [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:55.928  [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:55.928  [128/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.928  [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:55.928  [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:55.928  [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:55.928  [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:55.928  [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:55.928  [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:55.928  [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:55.928  [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:55.928  [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:56.188  [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:56.188  [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:56.188  [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:56.188  [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:56.188  [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:56.188  [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:56.188  [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:56.188  [145/265] Linking static target lib/librte_cmdline.a
00:01:56.446  [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:56.446  [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:56.446  [148/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:56.446  [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:56.446  [150/265] Linking static target lib/librte_timer.a
00:01:56.446  [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:56.705  [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:56.963  [153/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:56.963  [154/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:56.963  [155/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:56.963  [156/265] Linking static target lib/librte_ethdev.a
00:01:56.963  [157/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:56.963  [158/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:56.963  [159/265] Linking static target lib/librte_compressdev.a
00:01:56.963  [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:56.963  [161/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:57.222  [162/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:57.222  [163/265] Linking static target lib/librte_hash.a
00:01:57.222  [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:57.222  [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:57.222  [166/265] Linking static target lib/librte_dmadev.a
00:01:57.222  [167/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.222  [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:57.222  [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:57.480  [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:57.480  [171/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.480  [172/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.480  [173/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:57.739  [174/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.739  [175/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:57.739  [176/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:57.739  [177/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:57.739  [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:57.739  [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:57.997  [180/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:57.997  [181/265] Linking static target lib/librte_cryptodev.a
00:01:57.997  [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:57.997  [183/265] Linking static target lib/librte_power.a
00:01:57.997  [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:57.997  [185/265] Linking static target lib/librte_reorder.a
00:01:58.256  [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:58.256  [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:58.256  [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:58.256  [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:58.256  [190/265] Linking static target lib/librte_security.a
00:01:58.256  [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.514  [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:58.772  [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.772  [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.772  [195/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:58.772  [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:59.030  [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:59.030  [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:59.030  [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:59.030  [200/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.030  [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:59.289  [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:59.289  [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:59.547  [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:59.547  [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:59.547  [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:59.547  [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:59.547  [208/265] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:59.547  [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:59.547  [210/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:59.547  [211/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:59.547  [212/265] Linking static target drivers/librte_bus_vdev.a
00:01:59.806  [213/265] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:59.806  [214/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:59.806  [215/265] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:59.806  [216/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:59.806  [217/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:59.806  [218/265] Linking static target drivers/librte_bus_pci.a
00:01:59.806  [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.065  [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:00.065  [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:00.065  [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:00.065  [223/265] Linking static target drivers/librte_mempool_ring.a
00:02:00.324  [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.701  [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:01.701  [226/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.701  [227/265] Linking target lib/librte_eal.so.24.0
00:02:01.959  [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:02:01.959  [229/265] Linking target lib/librte_dmadev.so.24.0
00:02:01.959  [230/265] Linking target lib/librte_meter.so.24.0
00:02:01.959  [231/265] Linking target lib/librte_timer.so.24.0
00:02:01.959  [232/265] Linking target drivers/librte_bus_vdev.so.24.0
00:02:01.959  [233/265] Linking target lib/librte_ring.so.24.0
00:02:01.959  [234/265] Linking target lib/librte_pci.so.24.0
00:02:02.217  [235/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:02:02.217  [236/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:02:02.217  [237/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:02:02.217  [238/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:02:02.217  [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:02:02.217  [240/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.217  [241/265] Linking target lib/librte_rcu.so.24.0
00:02:02.217  [242/265] Linking target lib/librte_mempool.so.24.0
00:02:02.217  [243/265] Linking target drivers/librte_bus_pci.so.24.0
00:02:02.217  [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:02:02.217  [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:02:02.476  [246/265] Linking target drivers/librte_mempool_ring.so.24.0
00:02:02.476  [247/265] Linking target lib/librte_mbuf.so.24.0
00:02:02.476  [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:02:02.476  [249/265] Linking target lib/librte_compressdev.so.24.0
00:02:02.476  [250/265] Linking target lib/librte_net.so.24.0
00:02:02.476  [251/265] Linking target lib/librte_reorder.so.24.0
00:02:02.476  [252/265] Linking target lib/librte_cryptodev.so.24.0
00:02:02.735  [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:02:02.735  [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:02:02.735  [255/265] Linking target lib/librte_security.so.24.0
00:02:02.735  [256/265] Linking target lib/librte_cmdline.so.24.0
00:02:02.735  [257/265] Linking target lib/librte_hash.so.24.0
00:02:02.735  [258/265] Linking target lib/librte_ethdev.so.24.0
00:02:02.994  [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:02:02.994  [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:02:02.994  [261/265] Linking target lib/librte_power.so.24.0
00:02:04.403  [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:04.403  [263/265] Linking static target lib/librte_vhost.a
00:02:06.305  [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.305  [265/265] Linking target lib/librte_vhost.so.24.0
00:02:06.305  INFO: autodetecting backend as ninja
00:02:06.305  INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:02:07.240    CC lib/ut_mock/mock.o
00:02:07.240    CC lib/ut/ut.o
00:02:07.240    CC lib/log/log.o
00:02:07.240    CC lib/log/log_flags.o
00:02:07.240    CC lib/log/log_deprecated.o
00:02:07.240    LIB libspdk_ut_mock.a
00:02:07.240    LIB libspdk_log.a
00:02:07.240    LIB libspdk_ut.a
00:02:07.499    CC lib/util/base64.o
00:02:07.499    CC lib/util/bit_array.o
00:02:07.499    CC lib/util/cpuset.o
00:02:07.499    CC lib/util/crc32.o
00:02:07.499    CC lib/dma/dma.o
00:02:07.499    CC lib/util/crc16.o
00:02:07.499    CXX lib/trace_parser/trace.o
00:02:07.499    CC lib/util/crc32c.o
00:02:07.499    CC lib/ioat/ioat.o
00:02:07.499    CC lib/vfio_user/host/vfio_user_pci.o
00:02:07.758    CC lib/util/crc32_ieee.o
00:02:07.758    CC lib/util/crc64.o
00:02:07.758    CC lib/util/dif.o
00:02:07.758    CC lib/util/fd.o
00:02:07.758    LIB libspdk_dma.a
00:02:07.758    CC lib/vfio_user/host/vfio_user.o
00:02:07.758    CC lib/util/file.o
00:02:07.758    CC lib/util/hexlify.o
00:02:07.758    CC lib/util/iov.o
00:02:07.758    CC lib/util/math.o
00:02:07.758    CC lib/util/pipe.o
00:02:07.758    LIB libspdk_ioat.a
00:02:07.758    CC lib/util/strerror_tls.o
00:02:08.017    CC lib/util/string.o
00:02:08.017    CC lib/util/uuid.o
00:02:08.017    CC lib/util/fd_group.o
00:02:08.017    LIB libspdk_vfio_user.a
00:02:08.017    CC lib/util/xor.o
00:02:08.017    CC lib/util/zipf.o
00:02:08.276    LIB libspdk_util.a
00:02:08.534    CC lib/rdma/common.o
00:02:08.534    CC lib/rdma/rdma_verbs.o
00:02:08.534    LIB libspdk_trace_parser.a
00:02:08.534    CC lib/vmd/vmd.o
00:02:08.534    CC lib/vmd/led.o
00:02:08.534    CC lib/idxd/idxd.o
00:02:08.534    CC lib/idxd/idxd_user.o
00:02:08.534    CC lib/env_dpdk/env.o
00:02:08.534    CC lib/conf/conf.o
00:02:08.534    CC lib/json/json_parse.o
00:02:08.534    CC lib/env_dpdk/memory.o
00:02:08.534    CC lib/env_dpdk/pci.o
00:02:08.534    CC lib/env_dpdk/init.o
00:02:08.534    CC lib/json/json_util.o
00:02:08.793    LIB libspdk_rdma.a
00:02:08.793    LIB libspdk_conf.a
00:02:08.793    CC lib/json/json_write.o
00:02:08.793    CC lib/env_dpdk/threads.o
00:02:08.793    CC lib/env_dpdk/pci_ioat.o
00:02:08.793    CC lib/env_dpdk/pci_virtio.o
00:02:08.793    CC lib/env_dpdk/pci_vmd.o
00:02:09.052    CC lib/env_dpdk/pci_idxd.o
00:02:09.052    CC lib/env_dpdk/pci_event.o
00:02:09.052    CC lib/env_dpdk/sigbus_handler.o
00:02:09.052    LIB libspdk_idxd.a
00:02:09.052    LIB libspdk_json.a
00:02:09.052    CC lib/env_dpdk/pci_dpdk.o
00:02:09.052    CC lib/env_dpdk/pci_dpdk_2207.o
00:02:09.052    CC lib/env_dpdk/pci_dpdk_2211.o
00:02:09.052    LIB libspdk_vmd.a
00:02:09.052    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:09.052    CC lib/jsonrpc/jsonrpc_server.o
00:02:09.052    CC lib/jsonrpc/jsonrpc_client.o
00:02:09.052    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:09.310    LIB libspdk_jsonrpc.a
00:02:09.569    CC lib/rpc/rpc.o
00:02:09.828    LIB libspdk_rpc.a
00:02:09.828    LIB libspdk_env_dpdk.a
00:02:09.828    CC lib/notify/notify.o
00:02:09.828    CC lib/notify/notify_rpc.o
00:02:09.828    CC lib/sock/sock.o
00:02:09.828    CC lib/sock/sock_rpc.o
00:02:09.828    CC lib/trace/trace.o
00:02:09.828    CC lib/trace/trace_flags.o
00:02:09.828    CC lib/trace/trace_rpc.o
00:02:10.087    LIB libspdk_notify.a
00:02:10.087    LIB libspdk_trace.a
00:02:10.346    LIB libspdk_sock.a
00:02:10.346    CC lib/thread/iobuf.o
00:02:10.346    CC lib/thread/thread.o
00:02:10.346    CC lib/nvme/nvme_ctrlr_cmd.o
00:02:10.346    CC lib/nvme/nvme_ctrlr.o
00:02:10.346    CC lib/nvme/nvme_fabric.o
00:02:10.346    CC lib/nvme/nvme_ns_cmd.o
00:02:10.346    CC lib/nvme/nvme_ns.o
00:02:10.346    CC lib/nvme/nvme_pcie_common.o
00:02:10.346    CC lib/nvme/nvme_pcie.o
00:02:10.346    CC lib/nvme/nvme_qpair.o
00:02:10.604    CC lib/nvme/nvme.o
00:02:10.875    CC lib/nvme/nvme_quirks.o
00:02:11.152    CC lib/nvme/nvme_transport.o
00:02:11.152    CC lib/nvme/nvme_discovery.o
00:02:11.152    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:11.152    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:11.152    CC lib/nvme/nvme_tcp.o
00:02:11.411    CC lib/nvme/nvme_opal.o
00:02:11.411    CC lib/nvme/nvme_io_msg.o
00:02:11.411    CC lib/nvme/nvme_poll_group.o
00:02:11.411    CC lib/nvme/nvme_zns.o
00:02:11.669    CC lib/nvme/nvme_cuse.o
00:02:11.669    CC lib/nvme/nvme_vfio_user.o
00:02:11.669    CC lib/nvme/nvme_rdma.o
00:02:11.928    LIB libspdk_thread.a
00:02:11.928    CC lib/virtio/virtio_vhost_user.o
00:02:11.928    CC lib/virtio/virtio.o
00:02:11.928    CC lib/blob/blobstore.o
00:02:11.928    CC lib/accel/accel.o
00:02:11.928    CC lib/init/json_config.o
00:02:12.186    CC lib/blob/request.o
00:02:12.186    CC lib/init/subsystem.o
00:02:12.186    CC lib/init/subsystem_rpc.o
00:02:12.445    CC lib/accel/accel_rpc.o
00:02:12.445    CC lib/virtio/virtio_vfio_user.o
00:02:12.445    CC lib/accel/accel_sw.o
00:02:12.445    CC lib/blob/zeroes.o
00:02:12.445    CC lib/init/rpc.o
00:02:12.445    CC lib/virtio/virtio_pci.o
00:02:12.703    LIB libspdk_init.a
00:02:12.703    CC lib/blob/blob_bs_dev.o
00:02:12.703    CC lib/event/app.o
00:02:12.703    CC lib/event/reactor.o
00:02:12.703    CC lib/event/log_rpc.o
00:02:12.703    CC lib/event/app_rpc.o
00:02:12.703    CC lib/event/scheduler_static.o
00:02:12.962    LIB libspdk_virtio.a
00:02:12.962    LIB libspdk_nvme.a
00:02:12.962    LIB libspdk_accel.a
00:02:13.221    LIB libspdk_event.a
00:02:13.221    CC lib/bdev/bdev.o
00:02:13.221    CC lib/bdev/bdev_rpc.o
00:02:13.221    CC lib/bdev/bdev_zone.o
00:02:13.221    CC lib/bdev/part.o
00:02:13.221    CC lib/bdev/scsi_nvme.o
00:02:15.123    LIB libspdk_blob.a
00:02:15.123    CC lib/lvol/lvol.o
00:02:15.123    CC lib/blobfs/blobfs.o
00:02:15.123    CC lib/blobfs/tree.o
00:02:15.689    LIB libspdk_bdev.a
00:02:15.947    CC lib/scsi/dev.o
00:02:15.947    CC lib/scsi/port.o
00:02:15.947    CC lib/scsi/lun.o
00:02:15.947    CC lib/scsi/scsi.o
00:02:15.947    CC lib/nbd/nbd.o
00:02:15.947    CC lib/scsi/scsi_bdev.o
00:02:15.947    CC lib/nvmf/ctrlr.o
00:02:15.947    CC lib/ftl/ftl_core.o
00:02:15.947    LIB libspdk_lvol.a
00:02:15.947    CC lib/ftl/ftl_init.o
00:02:15.947    LIB libspdk_blobfs.a
00:02:16.205    CC lib/nvmf/ctrlr_discovery.o
00:02:16.205    CC lib/nvmf/ctrlr_bdev.o
00:02:16.205    CC lib/nvmf/subsystem.o
00:02:16.205    CC lib/nvmf/nvmf.o
00:02:16.205    CC lib/scsi/scsi_pr.o
00:02:16.205    CC lib/scsi/scsi_rpc.o
00:02:16.463    CC lib/ftl/ftl_layout.o
00:02:16.463    CC lib/nbd/nbd_rpc.o
00:02:16.463    CC lib/ftl/ftl_debug.o
00:02:16.463    CC lib/scsi/task.o
00:02:16.463    CC lib/ftl/ftl_io.o
00:02:16.463    LIB libspdk_nbd.a
00:02:16.722    CC lib/nvmf/nvmf_rpc.o
00:02:16.722    CC lib/nvmf/transport.o
00:02:16.722    CC lib/nvmf/tcp.o
00:02:16.722    CC lib/nvmf/rdma.o
00:02:16.722    LIB libspdk_scsi.a
00:02:16.722    CC lib/ftl/ftl_sb.o
00:02:16.722    CC lib/ftl/ftl_l2p.o
00:02:16.980    CC lib/ftl/ftl_l2p_flat.o
00:02:16.980    CC lib/ftl/ftl_nv_cache.o
00:02:16.980    CC lib/iscsi/conn.o
00:02:16.980    CC lib/iscsi/init_grp.o
00:02:16.980    CC lib/iscsi/iscsi.o
00:02:17.239    CC lib/ftl/ftl_band.o
00:02:17.239    CC lib/ftl/ftl_band_ops.o
00:02:17.497    CC lib/vhost/vhost.o
00:02:17.497    CC lib/ftl/ftl_writer.o
00:02:17.755    CC lib/ftl/ftl_rq.o
00:02:17.755    CC lib/ftl/ftl_reloc.o
00:02:17.755    CC lib/vhost/vhost_rpc.o
00:02:17.755    CC lib/vhost/vhost_scsi.o
00:02:17.755    CC lib/vhost/vhost_blk.o
00:02:17.755    CC lib/vhost/rte_vhost_user.o
00:02:17.755    CC lib/iscsi/md5.o
00:02:18.013    CC lib/iscsi/param.o
00:02:18.013    CC lib/ftl/ftl_l2p_cache.o
00:02:18.013    CC lib/ftl/ftl_p2l.o
00:02:18.271    CC lib/ftl/mngt/ftl_mngt.o
00:02:18.271    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:18.271    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:18.528    CC lib/iscsi/portal_grp.o
00:02:18.528    CC lib/iscsi/tgt_node.o
00:02:18.528    CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:18.528    CC lib/ftl/mngt/ftl_mngt_md.o
00:02:18.528    CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:18.528    CC lib/iscsi/iscsi_subsystem.o
00:02:18.528    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:18.787    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:18.787    CC lib/ftl/mngt/ftl_mngt_band.o
00:02:18.787    CC lib/iscsi/iscsi_rpc.o
00:02:18.787    LIB libspdk_vhost.a
00:02:18.787    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:18.787    CC lib/iscsi/task.o
00:02:18.787    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:18.787    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:18.787    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:18.787    CC lib/ftl/utils/ftl_conf.o
00:02:19.045    LIB libspdk_nvmf.a
00:02:19.045    CC lib/ftl/utils/ftl_md.o
00:02:19.045    CC lib/ftl/utils/ftl_mempool.o
00:02:19.045    CC lib/ftl/utils/ftl_bitmap.o
00:02:19.045    CC lib/ftl/utils/ftl_property.o
00:02:19.045    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:19.045    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:19.045    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:19.045    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:19.045    LIB libspdk_iscsi.a
00:02:19.304    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:19.304    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:19.304    CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:19.304    CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:19.304    CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:19.304    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:19.304    CC lib/ftl/base/ftl_base_dev.o
00:02:19.304    CC lib/ftl/base/ftl_base_bdev.o
00:02:19.304    CC lib/ftl/ftl_trace.o
00:02:19.562    LIB libspdk_ftl.a
00:02:19.821    CC module/env_dpdk/env_dpdk_rpc.o
00:02:19.821    CC module/blob/bdev/blob_bdev.o
00:02:19.821    CC module/scheduler/gscheduler/gscheduler.o
00:02:19.821    CC module/accel/iaa/accel_iaa.o
00:02:19.821    CC module/accel/ioat/accel_ioat.o
00:02:19.821    CC module/accel/error/accel_error.o
00:02:19.821    CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:19.821    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:19.821    CC module/sock/posix/posix.o
00:02:19.821    CC module/accel/dsa/accel_dsa.o
00:02:20.080    LIB libspdk_env_dpdk_rpc.a
00:02:20.080    LIB libspdk_scheduler_gscheduler.a
00:02:20.080    LIB libspdk_scheduler_dpdk_governor.a
00:02:20.080    CC module/accel/error/accel_error_rpc.o
00:02:20.080    CC module/accel/ioat/accel_ioat_rpc.o
00:02:20.080    CC module/accel/dsa/accel_dsa_rpc.o
00:02:20.080    CC module/accel/iaa/accel_iaa_rpc.o
00:02:20.080    LIB libspdk_scheduler_dynamic.a
00:02:20.080    LIB libspdk_blob_bdev.a
00:02:20.338    LIB libspdk_accel_error.a
00:02:20.338    LIB libspdk_accel_ioat.a
00:02:20.338    LIB libspdk_accel_iaa.a
00:02:20.338    LIB libspdk_accel_dsa.a
00:02:20.338    CC module/bdev/gpt/gpt.o
00:02:20.338    CC module/bdev/lvol/vbdev_lvol.o
00:02:20.338    CC module/blobfs/bdev/blobfs_bdev.o
00:02:20.338    CC module/bdev/delay/vbdev_delay.o
00:02:20.338    CC module/bdev/null/bdev_null.o
00:02:20.338    CC module/bdev/malloc/bdev_malloc.o
00:02:20.338    CC module/bdev/error/vbdev_error.o
00:02:20.338    CC module/bdev/nvme/bdev_nvme.o
00:02:20.338    CC module/bdev/passthru/vbdev_passthru.o
00:02:20.597    CC module/bdev/gpt/vbdev_gpt.o
00:02:20.597    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:20.597    CC module/bdev/null/bdev_null_rpc.o
00:02:20.597    CC module/bdev/error/vbdev_error_rpc.o
00:02:20.597    CC module/bdev/delay/vbdev_delay_rpc.o
00:02:20.597    LIB libspdk_blobfs_bdev.a
00:02:20.597    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:20.856    CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:20.856    CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:20.856    LIB libspdk_bdev_gpt.a
00:02:20.856    LIB libspdk_bdev_error.a
00:02:20.856    LIB libspdk_bdev_null.a
00:02:20.856    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:20.856    LIB libspdk_sock_posix.a
00:02:20.856    LIB libspdk_bdev_delay.a
00:02:20.856    LIB libspdk_bdev_passthru.a
00:02:20.856    CC module/bdev/nvme/nvme_rpc.o
00:02:20.856    LIB libspdk_bdev_malloc.a
00:02:20.856    CC module/bdev/split/vbdev_split.o
00:02:20.856    CC module/bdev/zone_block/vbdev_zone_block.o
00:02:20.856    CC module/bdev/raid/bdev_raid.o
00:02:20.856    CC module/bdev/raid/bdev_raid_rpc.o
00:02:21.114    CC module/bdev/ftl/bdev_ftl.o
00:02:21.114    CC module/bdev/aio/bdev_aio.o
00:02:21.114    CC module/bdev/split/vbdev_split_rpc.o
00:02:21.114    CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:21.114    LIB libspdk_bdev_lvol.a
00:02:21.114    CC module/bdev/raid/bdev_raid_sb.o
00:02:21.373    CC module/bdev/raid/raid0.o
00:02:21.373    CC module/bdev/raid/raid1.o
00:02:21.373    LIB libspdk_bdev_split.a
00:02:21.373    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:21.373    CC module/bdev/raid/concat.o
00:02:21.373    CC module/bdev/aio/bdev_aio_rpc.o
00:02:21.373    CC module/bdev/raid/raid5f.o
00:02:21.373    LIB libspdk_bdev_ftl.a
00:02:21.373    LIB libspdk_bdev_zone_block.a
00:02:21.373    CC module/bdev/iscsi/bdev_iscsi.o
00:02:21.373    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:21.631    LIB libspdk_bdev_aio.a
00:02:21.631    CC module/bdev/nvme/bdev_mdns_client.o
00:02:21.631    CC module/bdev/nvme/vbdev_opal.o
00:02:21.631    CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:21.631    CC module/bdev/virtio/bdev_virtio_blk.o
00:02:21.631    CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:21.631    CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:21.631    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:21.890    LIB libspdk_bdev_iscsi.a
00:02:21.890    LIB libspdk_bdev_raid.a
00:02:22.148    LIB libspdk_bdev_virtio.a
00:02:22.715    LIB libspdk_bdev_nvme.a
00:02:22.974    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:22.974    CC module/event/subsystems/sock/sock.o
00:02:22.974    CC module/event/subsystems/iobuf/iobuf.o
00:02:22.974    CC module/event/subsystems/scheduler/scheduler.o
00:02:22.974    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:22.974    CC module/event/subsystems/vmd/vmd_rpc.o
00:02:22.974    CC module/event/subsystems/vmd/vmd.o
00:02:23.233    LIB libspdk_event_sock.a
00:02:23.233    LIB libspdk_event_scheduler.a
00:02:23.233    LIB libspdk_event_vhost_blk.a
00:02:23.233    LIB libspdk_event_vmd.a
00:02:23.233    LIB libspdk_event_iobuf.a
00:02:23.491    CC module/event/subsystems/accel/accel.o
00:02:23.491    LIB libspdk_event_accel.a
00:02:23.750    CC module/event/subsystems/bdev/bdev.o
00:02:24.008    LIB libspdk_event_bdev.a
00:02:24.008    CC module/event/subsystems/scsi/scsi.o
00:02:24.008    CC module/event/subsystems/nbd/nbd.o
00:02:24.008    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:24.008    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:24.267    LIB libspdk_event_nbd.a
00:02:24.267    LIB libspdk_event_scsi.a
00:02:24.267    LIB libspdk_event_nvmf.a
00:02:24.526    CC module/event/subsystems/iscsi/iscsi.o
00:02:24.526    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:24.526    LIB libspdk_event_vhost_scsi.a
00:02:24.526    LIB libspdk_event_iscsi.a
00:02:24.785    CXX app/trace/trace.o
00:02:24.785    CC examples/nvme/hello_world/hello_world.o
00:02:24.785    CC examples/vmd/lsvmd/lsvmd.o
00:02:24.785    CC examples/sock/hello_world/hello_sock.o
00:02:24.785    CC examples/accel/perf/accel_perf.o
00:02:24.785    CC examples/ioat/perf/perf.o
00:02:24.785    CC examples/bdev/hello_world/hello_bdev.o
00:02:24.785    CC examples/blob/hello_world/hello_blob.o
00:02:24.785    CC test/accel/dif/dif.o
00:02:24.785    CC test/app/bdev_svc/bdev_svc.o
00:02:25.047    LINK lsvmd
00:02:25.047    LINK bdev_svc
00:02:25.047    LINK hello_world
00:02:25.047    LINK ioat_perf
00:02:25.047    LINK hello_bdev
00:02:25.047    LINK hello_blob
00:02:25.047    LINK hello_sock
00:02:25.329    LINK spdk_trace
00:02:25.329    LINK dif
00:02:25.329    LINK accel_perf
00:02:25.603    CC app/trace_record/trace_record.o
00:02:25.603    CC examples/ioat/verify/verify.o
00:02:25.861    LINK spdk_trace_record
00:02:25.861    CC app/nvmf_tgt/nvmf_main.o
00:02:25.861    LINK verify
00:02:26.119    CC examples/vmd/led/led.o
00:02:26.119    CC examples/nvme/reconnect/reconnect.o
00:02:26.119    LINK nvmf_tgt
00:02:26.119    LINK led
00:02:26.119    CC examples/nvme/nvme_manage/nvme_manage.o
00:02:26.119    CC examples/bdev/bdevperf/bdevperf.o
00:02:26.377    LINK reconnect
00:02:26.635    CC test/bdev/bdevio/bdevio.o
00:02:26.635    LINK nvme_manage
00:02:26.894    LINK bdevperf
00:02:26.894    LINK bdevio
00:02:27.152    CC test/blobfs/mkfs/mkfs.o
00:02:27.410    LINK mkfs
00:02:27.410    CC app/iscsi_tgt/iscsi_tgt.o
00:02:27.410    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:27.669    LINK iscsi_tgt
00:02:27.669    CC app/spdk_lspci/spdk_lspci.o
00:02:27.669    CC examples/blob/cli/blobcli.o
00:02:27.669    CC app/spdk_tgt/spdk_tgt.o
00:02:27.669    CC examples/nvme/arbitration/arbitration.o
00:02:27.669    LINK spdk_lspci
00:02:27.927    LINK spdk_tgt
00:02:27.927    LINK nvme_fuzz
00:02:28.185    LINK arbitration
00:02:28.185    LINK blobcli
00:02:28.752    TEST_HEADER include/spdk/accel.h
00:02:28.752    TEST_HEADER include/spdk/accel_module.h
00:02:28.752    TEST_HEADER include/spdk/assert.h
00:02:28.752    TEST_HEADER include/spdk/barrier.h
00:02:28.752    TEST_HEADER include/spdk/base64.h
00:02:28.752    TEST_HEADER include/spdk/bdev.h
00:02:28.752    TEST_HEADER include/spdk/bdev_module.h
00:02:28.752    TEST_HEADER include/spdk/bdev_zone.h
00:02:28.752    TEST_HEADER include/spdk/bit_array.h
00:02:28.752    TEST_HEADER include/spdk/bit_pool.h
00:02:28.752    TEST_HEADER include/spdk/blob.h
00:02:28.752    TEST_HEADER include/spdk/blob_bdev.h
00:02:28.752    TEST_HEADER include/spdk/blobfs.h
00:02:28.752    TEST_HEADER include/spdk/blobfs_bdev.h
00:02:28.752    TEST_HEADER include/spdk/conf.h
00:02:28.752    TEST_HEADER include/spdk/config.h
00:02:28.752    TEST_HEADER include/spdk/cpuset.h
00:02:28.752    TEST_HEADER include/spdk/crc16.h
00:02:28.752    TEST_HEADER include/spdk/crc32.h
00:02:28.752    TEST_HEADER include/spdk/crc64.h
00:02:28.752    TEST_HEADER include/spdk/dif.h
00:02:28.752    TEST_HEADER include/spdk/dma.h
00:02:28.752    TEST_HEADER include/spdk/endian.h
00:02:28.752    TEST_HEADER include/spdk/env.h
00:02:28.752    TEST_HEADER include/spdk/env_dpdk.h
00:02:28.752    TEST_HEADER include/spdk/event.h
00:02:28.752    TEST_HEADER include/spdk/fd.h
00:02:28.752    TEST_HEADER include/spdk/fd_group.h
00:02:28.752    TEST_HEADER include/spdk/file.h
00:02:28.752    TEST_HEADER include/spdk/ftl.h
00:02:28.752    TEST_HEADER include/spdk/gpt_spec.h
00:02:28.752    TEST_HEADER include/spdk/hexlify.h
00:02:28.752    TEST_HEADER include/spdk/histogram_data.h
00:02:28.752    TEST_HEADER include/spdk/idxd.h
00:02:28.752    TEST_HEADER include/spdk/idxd_spec.h
00:02:28.752    TEST_HEADER include/spdk/init.h
00:02:28.752    TEST_HEADER include/spdk/ioat.h
00:02:28.752    TEST_HEADER include/spdk/ioat_spec.h
00:02:28.752    TEST_HEADER include/spdk/iscsi_spec.h
00:02:28.752    TEST_HEADER include/spdk/json.h
00:02:28.752    TEST_HEADER include/spdk/jsonrpc.h
00:02:28.752    TEST_HEADER include/spdk/likely.h
00:02:28.752    TEST_HEADER include/spdk/log.h
00:02:28.752    TEST_HEADER include/spdk/lvol.h
00:02:28.752    TEST_HEADER include/spdk/memory.h
00:02:28.752    TEST_HEADER include/spdk/mmio.h
00:02:28.752    TEST_HEADER include/spdk/nbd.h
00:02:28.752    TEST_HEADER include/spdk/notify.h
00:02:28.752    TEST_HEADER include/spdk/nvme.h
00:02:28.752    TEST_HEADER include/spdk/nvme_intel.h
00:02:28.752    TEST_HEADER include/spdk/nvme_ocssd.h
00:02:28.752    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:02:28.752    TEST_HEADER include/spdk/nvme_spec.h
00:02:28.752    TEST_HEADER include/spdk/nvme_zns.h
00:02:28.752    TEST_HEADER include/spdk/nvmf.h
00:02:28.752    TEST_HEADER include/spdk/nvmf_cmd.h
00:02:28.752    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:02:28.752    TEST_HEADER include/spdk/nvmf_spec.h
00:02:28.752    TEST_HEADER include/spdk/nvmf_transport.h
00:02:29.010    TEST_HEADER include/spdk/opal.h
00:02:29.010    TEST_HEADER include/spdk/opal_spec.h
00:02:29.010    TEST_HEADER include/spdk/pci_ids.h
00:02:29.010    TEST_HEADER include/spdk/pipe.h
00:02:29.010    TEST_HEADER include/spdk/queue.h
00:02:29.010    TEST_HEADER include/spdk/reduce.h
00:02:29.010    TEST_HEADER include/spdk/rpc.h
00:02:29.010    TEST_HEADER include/spdk/scheduler.h
00:02:29.010    TEST_HEADER include/spdk/scsi.h
00:02:29.010    TEST_HEADER include/spdk/scsi_spec.h
00:02:29.010    TEST_HEADER include/spdk/sock.h
00:02:29.010    TEST_HEADER include/spdk/stdinc.h
00:02:29.010    TEST_HEADER include/spdk/string.h
00:02:29.010    TEST_HEADER include/spdk/thread.h
00:02:29.010    TEST_HEADER include/spdk/trace.h
00:02:29.010    TEST_HEADER include/spdk/trace_parser.h
00:02:29.010    TEST_HEADER include/spdk/tree.h
00:02:29.010    TEST_HEADER include/spdk/ublk.h
00:02:29.010    TEST_HEADER include/spdk/util.h
00:02:29.010    TEST_HEADER include/spdk/uuid.h
00:02:29.010    TEST_HEADER include/spdk/version.h
00:02:29.010    TEST_HEADER include/spdk/vfio_user_pci.h
00:02:29.010    TEST_HEADER include/spdk/vfio_user_spec.h
00:02:29.010    TEST_HEADER include/spdk/vhost.h
00:02:29.010    TEST_HEADER include/spdk/vmd.h
00:02:29.010    TEST_HEADER include/spdk/xor.h
00:02:29.010    TEST_HEADER include/spdk/zipf.h
00:02:29.010    CXX test/cpp_headers/accel.o
00:02:29.010    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:29.010    CC examples/nvme/hotplug/hotplug.o
00:02:29.010    CXX test/cpp_headers/accel_module.o
00:02:29.268    CXX test/cpp_headers/assert.o
00:02:29.268    LINK hotplug
00:02:29.526    CXX test/cpp_headers/barrier.o
00:02:29.526    CXX test/cpp_headers/base64.o
00:02:29.784    CC examples/nvme/cmb_copy/cmb_copy.o
00:02:29.784    CC test/dma/test_dma/test_dma.o
00:02:29.784    CXX test/cpp_headers/bdev.o
00:02:29.784    LINK cmb_copy
00:02:29.784    CC test/env/mem_callbacks/mem_callbacks.o
00:02:29.784    CC app/spdk_nvme_perf/perf.o
00:02:30.043    CXX test/cpp_headers/bdev_module.o
00:02:30.043    LINK test_dma
00:02:30.043    CXX test/cpp_headers/bdev_zone.o
00:02:30.301    CXX test/cpp_headers/bit_array.o
00:02:30.301    LINK mem_callbacks
00:02:30.301    CC app/spdk_nvme_identify/identify.o
00:02:30.559    CXX test/cpp_headers/bit_pool.o
00:02:30.818    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:30.818    CC test/env/vtophys/vtophys.o
00:02:30.818    LINK spdk_nvme_perf
00:02:30.818    CXX test/cpp_headers/blob.o
00:02:30.818    LINK iscsi_fuzz
00:02:30.818    CC examples/nvme/abort/abort.o
00:02:30.818    LINK vtophys
00:02:30.818    LINK env_dpdk_post_init
00:02:31.076    CC test/event/event_perf/event_perf.o
00:02:31.076    CXX test/cpp_headers/blob_bdev.o
00:02:31.076    CC test/event/reactor/reactor.o
00:02:31.334    LINK spdk_nvme_identify
00:02:31.334    CXX test/cpp_headers/blobfs.o
00:02:31.334    LINK event_perf
00:02:31.334    LINK reactor
00:02:31.334    LINK abort
00:02:31.593    CXX test/cpp_headers/blobfs_bdev.o
00:02:31.593    CC test/event/reactor_perf/reactor_perf.o
00:02:31.593    LINK reactor_perf
00:02:31.593    CXX test/cpp_headers/conf.o
00:02:31.852    CXX test/cpp_headers/config.o
00:02:31.852    CC test/event/app_repeat/app_repeat.o
00:02:31.852    CXX test/cpp_headers/cpuset.o
00:02:31.852    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:31.852    CC test/env/memory/memory_ut.o
00:02:31.852    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:32.110    LINK app_repeat
00:02:32.110    CC app/spdk_nvme_discover/discovery_aer.o
00:02:32.110    CXX test/cpp_headers/crc16.o
00:02:32.110    CC app/spdk_top/spdk_top.o
00:02:32.368    CXX test/cpp_headers/crc32.o
00:02:32.368    LINK spdk_nvme_discover
00:02:32.368    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:32.368    CC app/vhost/vhost.o
00:02:32.368    LINK vhost_fuzz
00:02:32.368    CC test/lvol/esnap/esnap.o
00:02:32.627    LINK pmr_persistence
00:02:32.627    CXX test/cpp_headers/crc64.o
00:02:32.627    LINK vhost
00:02:32.886    CC app/spdk_dd/spdk_dd.o
00:02:32.886    LINK memory_ut
00:02:33.144    CXX test/cpp_headers/dif.o
00:02:33.144    CC test/event/scheduler/scheduler.o
00:02:33.144    CC test/env/pci/pci_ut.o
00:02:33.144    LINK spdk_dd
00:02:33.144    CXX test/cpp_headers/dma.o
00:02:33.144    LINK spdk_top
00:02:33.144    CC test/app/histogram_perf/histogram_perf.o
00:02:33.403    LINK scheduler
00:02:33.403    CXX test/cpp_headers/endian.o
00:02:33.403    CC test/app/jsoncat/jsoncat.o
00:02:33.403    LINK histogram_perf
00:02:33.403    CXX test/cpp_headers/env.o
00:02:33.661    LINK jsoncat
00:02:33.661    LINK pci_ut
00:02:33.661    CC examples/nvmf/nvmf/nvmf.o
00:02:33.661    CXX test/cpp_headers/env_dpdk.o
00:02:33.920    CC examples/util/zipf/zipf.o
00:02:33.920    CXX test/cpp_headers/event.o
00:02:33.920    LINK nvmf
00:02:33.920    LINK zipf
00:02:34.179    CXX test/cpp_headers/fd.o
00:02:34.179    CC test/nvme/aer/aer.o
00:02:34.179    CC test/nvme/reset/reset.o
00:02:34.179    CC test/app/stub/stub.o
00:02:34.179    CXX test/cpp_headers/fd_group.o
00:02:34.437    LINK reset
00:02:34.437    LINK stub
00:02:34.437    CXX test/cpp_headers/file.o
00:02:34.437    LINK aer
00:02:34.437    CC test/nvme/sgl/sgl.o
00:02:34.437    CXX test/cpp_headers/ftl.o
00:02:34.695    CXX test/cpp_headers/gpt_spec.o
00:02:34.695    LINK sgl
00:02:34.695    CXX test/cpp_headers/hexlify.o
00:02:34.953    CXX test/cpp_headers/histogram_data.o
00:02:35.212    CXX test/cpp_headers/idxd.o
00:02:35.212    CXX test/cpp_headers/idxd_spec.o
00:02:35.212    CXX test/cpp_headers/init.o
00:02:35.470    CXX test/cpp_headers/ioat.o
00:02:35.470    CXX test/cpp_headers/ioat_spec.o
00:02:35.470    CXX test/cpp_headers/iscsi_spec.o
00:02:35.470    CC test/nvme/e2edp/nvme_dp.o
00:02:35.470    CC test/nvme/overhead/overhead.o
00:02:35.729    CC test/nvme/err_injection/err_injection.o
00:02:35.729    CC test/nvme/startup/startup.o
00:02:35.729    CXX test/cpp_headers/json.o
00:02:35.729    CXX test/cpp_headers/jsonrpc.o
00:02:35.729    LINK err_injection
00:02:35.729    LINK nvme_dp
00:02:35.729    LINK startup
00:02:35.987    LINK overhead
00:02:35.987    CXX test/cpp_headers/likely.o
00:02:35.987    CC test/rpc_client/rpc_client_test.o
00:02:35.987    CXX test/cpp_headers/log.o
00:02:36.246    CC test/thread/poller_perf/poller_perf.o
00:02:36.246    LINK rpc_client_test
00:02:36.246    CC app/fio/nvme/fio_plugin.o
00:02:36.246    CXX test/cpp_headers/lvol.o
00:02:36.246    LINK poller_perf
00:02:36.504    CXX test/cpp_headers/memory.o
00:02:36.504    CXX test/cpp_headers/mmio.o
00:02:36.763    CXX test/cpp_headers/nbd.o
00:02:36.763    CXX test/cpp_headers/notify.o
00:02:36.763    CC app/fio/bdev/fio_plugin.o
00:02:36.763    LINK spdk_nvme
00:02:36.763    CXX test/cpp_headers/nvme.o
00:02:37.021    CXX test/cpp_headers/nvme_intel.o
00:02:37.021    CC test/nvme/reserve/reserve.o
00:02:37.021    CC examples/idxd/perf/perf.o
00:02:37.021    CC examples/interrupt_tgt/interrupt_tgt.o
00:02:37.021    CC examples/thread/thread/thread_ex.o
00:02:37.021    CC test/nvme/simple_copy/simple_copy.o
00:02:37.021    CC test/thread/lock/spdk_lock.o
00:02:37.021    CXX test/cpp_headers/nvme_ocssd.o
00:02:37.280    LINK reserve
00:02:37.280    LINK interrupt_tgt
00:02:37.280    LINK thread
00:02:37.280    CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:37.280    LINK spdk_bdev
00:02:37.280    LINK simple_copy
00:02:37.280    LINK idxd_perf
00:02:37.539    CXX test/cpp_headers/nvme_spec.o
00:02:37.539    CXX test/cpp_headers/nvme_zns.o
00:02:37.798    CXX test/cpp_headers/nvmf.o
00:02:37.798    CC test/nvme/connect_stress/connect_stress.o
00:02:37.798    LINK esnap
00:02:37.798    CXX test/cpp_headers/nvmf_cmd.o
00:02:38.057    CXX test/cpp_headers/nvmf_fc_spec.o
00:02:38.057    LINK connect_stress
00:02:38.315    CXX test/cpp_headers/nvmf_spec.o
00:02:38.315    CC test/nvme/boot_partition/boot_partition.o
00:02:38.315    CXX test/cpp_headers/nvmf_transport.o
00:02:38.315    CC test/unit/include/spdk/histogram_data.h/histogram_ut.o
00:02:38.315    LINK boot_partition
00:02:38.574    CC test/nvme/compliance/nvme_compliance.o
00:02:38.574    CC test/nvme/fused_ordering/fused_ordering.o
00:02:38.574    CXX test/cpp_headers/opal.o
00:02:38.832    LINK histogram_ut
00:02:38.832    LINK fused_ordering
00:02:38.832    CXX test/cpp_headers/opal_spec.o
00:02:38.832    LINK spdk_lock
00:02:39.091    LINK nvme_compliance
00:02:39.091    CXX test/cpp_headers/pci_ids.o
00:02:39.091    CC test/unit/lib/accel/accel.c/accel_ut.o
00:02:39.091    CC test/unit/lib/bdev/bdev.c/bdev_ut.o
00:02:39.091    CXX test/cpp_headers/pipe.o
00:02:39.091    CC test/unit/lib/bdev/part.c/part_ut.o
00:02:39.350    CXX test/cpp_headers/queue.o
00:02:39.350    CXX test/cpp_headers/reduce.o
00:02:39.608    CXX test/cpp_headers/rpc.o
00:02:39.608    CXX test/cpp_headers/scheduler.o
00:02:39.608    CXX test/cpp_headers/scsi.o
00:02:39.867    CC test/unit/lib/blobfs/tree.c/tree_ut.o
00:02:39.867    CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o
00:02:39.867    CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o
00:02:39.867    CXX test/cpp_headers/scsi_spec.o
00:02:39.867    CXX test/cpp_headers/sock.o
00:02:40.126    LINK tree_ut
00:02:40.126    CC test/nvme/doorbell_aers/doorbell_aers.o
00:02:40.126    CC test/nvme/fdp/fdp.o
00:02:40.126    CXX test/cpp_headers/stdinc.o
00:02:40.126    CXX test/cpp_headers/string.o
00:02:40.126    CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o
00:02:40.126    LINK doorbell_aers
00:02:40.384    CXX test/cpp_headers/thread.o
00:02:40.384    CC test/nvme/cuse/cuse.o
00:02:40.384    LINK fdp
00:02:40.384    LINK blob_bdev_ut
00:02:40.384    CXX test/cpp_headers/trace.o
00:02:40.643    CXX test/cpp_headers/trace_parser.o
00:02:40.643    CC test/unit/lib/blob/blob.c/blob_ut.o
00:02:40.643    CXX test/cpp_headers/tree.o
00:02:40.901    CXX test/cpp_headers/ublk.o
00:02:40.901    CXX test/cpp_headers/util.o
00:02:41.159    LINK blobfs_async_ut
00:02:41.159    CXX test/cpp_headers/uuid.o
00:02:41.159    LINK cuse
00:02:41.159    CXX test/cpp_headers/version.o
00:02:41.418    CXX test/cpp_headers/vfio_user_pci.o
00:02:41.418    CC test/unit/lib/dma/dma.c/dma_ut.o
00:02:41.418    CXX test/cpp_headers/vfio_user_spec.o
00:02:41.418    LINK accel_ut
00:02:41.418    LINK blobfs_sync_ut
00:02:41.418    CXX test/cpp_headers/vhost.o
00:02:41.677    CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o
00:02:41.677    CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o
00:02:41.677    CXX test/cpp_headers/vmd.o
00:02:41.677    CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o
00:02:41.677    LINK dma_ut
00:02:41.677    CXX test/cpp_headers/xor.o
00:02:41.936    CXX test/cpp_headers/zipf.o
00:02:41.936    LINK scsi_nvme_ut
00:02:41.936    CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o
00:02:41.936    CC test/unit/lib/event/app.c/app_ut.o
00:02:41.936    CC test/unit/lib/event/reactor.c/reactor_ut.o
00:02:41.936    CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:02:42.194    LINK gpt_ut
00:02:42.194    CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o
00:02:42.194    LINK blobfs_bdev_ut
00:02:42.194    CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o
00:02:42.453    CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o
00:02:42.453    LINK ioat_ut
00:02:42.712    LINK app_ut
00:02:42.712    CC test/unit/lib/bdev/raid/concat.c/concat_ut.o
00:02:42.712    LINK vbdev_lvol_ut
00:02:42.712    LINK bdev_raid_sb_ut
00:02:42.712    LINK part_ut
00:02:42.970    LINK reactor_ut
00:02:42.970    CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o
00:02:42.970    CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o
00:02:42.970    CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o
00:02:43.228    CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o
00:02:43.228    LINK bdev_zone_ut
00:02:43.228    LINK concat_ut
00:02:43.228    CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o
00:02:43.487    CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:02:43.487    LINK raid1_ut
00:02:43.487    CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:02:43.746    CC test/unit/lib/json/json_parse.c/json_parse_ut.o
00:02:44.005    LINK vbdev_zone_block_ut
00:02:44.005    LINK init_grp_ut
00:02:44.264    CC test/unit/lib/json/json_util.c/json_util_ut.o
00:02:44.264    LINK raid5f_ut
00:02:44.264    CC test/unit/lib/json/json_write.c/json_write_ut.o
00:02:44.523    LINK bdev_raid_ut
00:02:44.523    LINK bdev_ut
00:02:44.523    CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:02:44.523    LINK conn_ut
00:02:44.782    LINK json_util_ut
00:02:44.782    CC test/unit/lib/iscsi/param.c/param_ut.o
00:02:44.782    CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:02:44.782    LINK json_write_ut
00:02:44.782    CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:02:45.040    CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:02:45.040    CC test/unit/lib/log/log.c/log_ut.o
00:02:45.299    LINK param_ut
00:02:45.299    LINK portal_grp_ut
00:02:45.299    LINK jsonrpc_server_ut
00:02:45.558    LINK log_ut
00:02:45.558    CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:02:45.558    CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:02:45.558    CC test/unit/lib/notify/notify.c/notify_ut.o
00:02:45.558    LINK tgt_node_ut
00:02:45.817    CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:02:45.817    LINK notify_ut
00:02:46.076    LINK bdev_ut
00:02:46.076    CC test/unit/lib/scsi/dev.c/dev_ut.o
00:02:46.076    CC test/unit/lib/scsi/lun.c/lun_ut.o
00:02:46.335    CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:02:46.594    LINK json_parse_ut
00:02:46.594    LINK dev_ut
00:02:46.594    LINK scsi_ut
00:02:46.595    CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:02:46.595    CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:02:46.854    CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:02:46.854    LINK lun_ut
00:02:46.854    LINK nvme_ut
00:02:47.113    LINK iscsi_ut
00:02:47.113    CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:02:47.113    CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:02:47.373    LINK lvol_ut
00:02:47.373    CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:02:47.632    CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:02:47.632    LINK scsi_bdev_ut
00:02:47.890    LINK bdev_nvme_ut
00:02:47.890    CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:02:48.149    CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:02:48.409    LINK ctrlr_bdev_ut
00:02:48.409    LINK scsi_pr_ut
00:02:48.668    CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:02:48.668    CC test/unit/lib/sock/sock.c/sock_ut.o
00:02:48.668    LINK blob_ut
00:02:48.927    LINK nvmf_ut
00:02:48.927    LINK subsystem_ut
00:02:48.927    LINK ctrlr_discovery_ut
00:02:48.927    CC test/unit/lib/sock/posix.c/posix_ut.o
00:02:49.186    CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:02:49.445    CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:02:49.445    CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:02:49.445    LINK tcp_ut
00:02:49.704    LINK ctrlr_ut
00:02:49.962    LINK nvme_ctrlr_cmd_ut
00:02:49.963    CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:02:49.963    CC test/unit/lib/thread/thread.c/thread_ut.o
00:02:49.963    CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:02:50.221    LINK posix_ut
00:02:50.221    LINK sock_ut
00:02:50.221    LINK nvme_ns_ut
00:02:50.221    LINK nvme_ctrlr_ocssd_cmd_ut
00:02:50.480    CC test/unit/lib/util/base64.c/base64_ut.o
00:02:50.480    CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:02:50.480    LINK nvme_ctrlr_ut
00:02:50.480    CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:02:50.480    CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:02:50.739    LINK base64_ut
00:02:50.739    LINK pci_event_ut
00:02:50.739    CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:02:50.739    CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:02:50.998    CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:02:51.256    LINK bit_array_ut
00:02:51.515    LINK subsystem_ut
00:02:51.515    LINK nvme_poll_group_ut
00:02:51.515    CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:02:51.515    LINK nvme_ns_ocssd_cmd_ut
00:02:51.515    CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:02:51.773    LINK rdma_ut
00:02:51.773    LINK nvme_ns_cmd_ut
00:02:51.773    LINK cpuset_ut
00:02:51.773    CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:02:52.031    CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:02:52.031    CC test/unit/lib/util/crc16.c/crc16_ut.o
00:02:52.031    LINK nvme_qpair_ut
00:02:52.031    CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:02:52.031    LINK nvme_pcie_ut
00:02:52.031    LINK nvme_quirks_ut
00:02:52.031    CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:02:52.032    LINK crc16_ut
00:02:52.032    LINK thread_ut
00:02:52.290    LINK crc32_ieee_ut
00:02:52.290    CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:02:52.290    CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:02:52.290    CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:02:52.290    CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:02:52.570    CC test/unit/lib/util/crc64.c/crc64_ut.o
00:02:52.570    LINK crc32c_ut
00:02:52.570    CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:02:52.570    LINK transport_ut
00:02:52.570    LINK crc64_ut
00:02:52.570    CC test/unit/lib/util/dif.c/dif_ut.o
00:02:52.833    LINK nvme_io_msg_ut
00:02:52.833    CC test/unit/lib/util/iov.c/iov_ut.o
00:02:52.833    LINK nvme_transport_ut
00:02:52.833    CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:02:53.092    LINK nvme_opal_ut
00:02:53.092    CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:02:53.092    LINK iov_ut
00:02:53.092    CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:02:53.092    LINK nvme_fabric_ut
00:02:53.092    LINK iobuf_ut
00:02:53.351    CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:02:53.351    CC test/unit/lib/util/math.c/math_ut.o
00:02:53.351    CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:02:53.351    LINK rpc_ut
00:02:53.351    LINK math_ut
00:02:53.609    CC test/unit/lib/rdma/common.c/common_ut.o
00:02:53.609    CC test/unit/lib/util/pipe.c/pipe_ut.o
00:02:53.609    LINK nvme_pcie_common_ut
00:02:53.609    LINK idxd_user_ut
00:02:53.868    CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:02:53.868    LINK dif_ut
00:02:53.868    CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:02:53.868    CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:02:54.127    LINK common_ut
00:02:54.127    CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:02:54.127    LINK ftl_l2p_ut
00:02:54.127    LINK pipe_ut
00:02:54.127    LINK nvme_tcp_ut
00:02:54.386    CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:02:54.386    CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:02:54.386    CC test/unit/lib/util/string.c/string_ut.o
00:02:54.386    LINK nvme_cuse_ut
00:02:54.386    CC test/unit/lib/util/xor.c/xor_ut.o
00:02:54.644    LINK ftl_bitmap_ut
00:02:54.644    LINK ftl_io_ut
00:02:54.644    LINK ftl_mempool_ut
00:02:54.644    LINK string_ut
00:02:54.644    CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:02:54.644    CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:02:54.902    LINK xor_ut
00:02:54.902    LINK idxd_ut
00:02:54.902    CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:02:55.161    LINK nvme_rdma_ut
00:02:55.161    LINK ftl_mngt_ut
00:02:55.419    LINK vhost_ut
00:02:55.419    LINK ftl_band_ut
00:02:56.355    LINK ftl_layout_upgrade_ut
00:02:56.355    LINK ftl_sb_ut
00:02:56.355  
00:02:56.355  real	1m47.377s
00:02:56.355  user	9m16.267s
00:02:56.355  sys	1m54.424s
00:02:56.355   23:36:26	-- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:56.355  ************************************
00:02:56.355  END TEST unittest_build
00:02:56.355  ************************************
00:02:56.355   23:36:26	-- common/autotest_common.sh@10 -- $ set +x
00:02:56.355    23:36:27	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:02:56.355     23:36:27	-- common/autotest_common.sh@1690 -- # lcov --version
00:02:56.355     23:36:27	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:02:56.615    23:36:27	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:02:56.615    23:36:27	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:02:56.615    23:36:27	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:02:56.615    23:36:27	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:02:56.615    23:36:27	-- scripts/common.sh@335 -- # IFS=.-:
00:02:56.615    23:36:27	-- scripts/common.sh@335 -- # read -ra ver1
00:02:56.615    23:36:27	-- scripts/common.sh@336 -- # IFS=.-:
00:02:56.615    23:36:27	-- scripts/common.sh@336 -- # read -ra ver2
00:02:56.615    23:36:27	-- scripts/common.sh@337 -- # local 'op=<'
00:02:56.615    23:36:27	-- scripts/common.sh@339 -- # ver1_l=2
00:02:56.615    23:36:27	-- scripts/common.sh@340 -- # ver2_l=1
00:02:56.615    23:36:27	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:02:56.615    23:36:27	-- scripts/common.sh@343 -- # case "$op" in
00:02:56.615    23:36:27	-- scripts/common.sh@344 -- # : 1
00:02:56.615    23:36:27	-- scripts/common.sh@363 -- # (( v = 0 ))
00:02:56.615    23:36:27	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:56.615     23:36:27	-- scripts/common.sh@364 -- # decimal 1
00:02:56.615     23:36:27	-- scripts/common.sh@352 -- # local d=1
00:02:56.615     23:36:27	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:02:56.615     23:36:27	-- scripts/common.sh@354 -- # echo 1
00:02:56.615    23:36:27	-- scripts/common.sh@364 -- # ver1[v]=1
00:02:56.615     23:36:27	-- scripts/common.sh@365 -- # decimal 2
00:02:56.615     23:36:27	-- scripts/common.sh@352 -- # local d=2
00:02:56.615     23:36:27	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:02:56.615     23:36:27	-- scripts/common.sh@354 -- # echo 2
00:02:56.615    23:36:27	-- scripts/common.sh@365 -- # ver2[v]=2
00:02:56.615    23:36:27	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:02:56.615    23:36:27	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:02:56.615    23:36:27	-- scripts/common.sh@367 -- # return 0
00:02:56.615    23:36:27	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:02:56.615    23:36:27	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:02:56.615  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:56.615  		--rc genhtml_branch_coverage=1
00:02:56.615  		--rc genhtml_function_coverage=1
00:02:56.615  		--rc genhtml_legend=1
00:02:56.615  		--rc geninfo_all_blocks=1
00:02:56.615  		--rc geninfo_unexecuted_blocks=1
00:02:56.615  		
00:02:56.615  		'
00:02:56.615    23:36:27	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:02:56.615  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:56.615  		--rc genhtml_branch_coverage=1
00:02:56.615  		--rc genhtml_function_coverage=1
00:02:56.615  		--rc genhtml_legend=1
00:02:56.615  		--rc geninfo_all_blocks=1
00:02:56.615  		--rc geninfo_unexecuted_blocks=1
00:02:56.615  		
00:02:56.615  		'
00:02:56.615    23:36:27	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:02:56.615  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:56.615  		--rc genhtml_branch_coverage=1
00:02:56.615  		--rc genhtml_function_coverage=1
00:02:56.615  		--rc genhtml_legend=1
00:02:56.615  		--rc geninfo_all_blocks=1
00:02:56.615  		--rc geninfo_unexecuted_blocks=1
00:02:56.615  		
00:02:56.615  		'
00:02:56.615    23:36:27	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:02:56.615  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:56.615  		--rc genhtml_branch_coverage=1
00:02:56.615  		--rc genhtml_function_coverage=1
00:02:56.615  		--rc genhtml_legend=1
00:02:56.615  		--rc geninfo_all_blocks=1
00:02:56.615  		--rc geninfo_unexecuted_blocks=1
00:02:56.615  		
00:02:56.615  		'
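The cmp_versions walk traced at scripts/common.sh@332-367 above boils down to the bash sketch below, reduced to the '<' case that lt 1.15 2 exercises. The IFS=.-: split mirrors the trace; the decimal normalizer and the other operators of the real helper are omitted, so treat this as a simplified reconstruction rather than the shipped code.

  lt() {
      local ver1 ver2 ver1_l ver2_l v a b
      IFS=.-: read -ra ver1 <<< "$1"      # "1.15" -> (1 15)
      IFS=.-: read -ra ver2 <<< "$2"      # "2"    -> (2)
      ver1_l=${#ver1[@]}; ver2_l=${#ver2[@]}
      for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
          a=${ver1[v]:-0}; b=${ver2[v]:-0}
          (( a < b )) && return 0         # '<' holds, as at @367 above
          (( a > b )) && return 1
      done
      return 1                            # equal versions: '<' does not hold
  }
  lt 1.15 2 && echo "pre-2.x lcov: use the --rc lcov_* option spelling"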
00:02:56.615   23:36:27	-- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:02:56.615     23:36:27	-- nvmf/common.sh@7 -- # uname -s
00:02:56.615    23:36:27	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:56.615    23:36:27	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:56.615    23:36:27	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:56.615    23:36:27	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:56.615    23:36:27	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:56.615    23:36:27	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:56.615    23:36:27	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:56.615    23:36:27	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:56.615    23:36:27	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:56.615     23:36:27	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:56.615    23:36:27	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1802c14-b01e-4423-a721-9106f49d7f16
00:02:56.615    23:36:27	-- nvmf/common.sh@18 -- # NVME_HOSTID=b1802c14-b01e-4423-a721-9106f49d7f16
00:02:56.615    23:36:27	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:56.615    23:36:27	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:56.615    23:36:27	-- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:02:56.615    23:36:27	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:56.615     23:36:27	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:56.615     23:36:27	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:56.615     23:36:27	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:56.615      23:36:27	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:56.615      23:36:27	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:56.615      23:36:27	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:56.615      23:36:27	-- paths/export.sh@5 -- # export PATH
00:02:56.615      23:36:27	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:56.615    23:36:27	-- nvmf/common.sh@46 -- # : 0
00:02:56.615    23:36:27	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:02:56.615    23:36:27	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:02:56.615    23:36:27	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:02:56.615    23:36:27	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:56.615    23:36:27	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:56.615    23:36:27	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:02:56.615    23:36:27	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:02:56.615    23:36:27	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:02:56.616   23:36:27	-- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:56.616    23:36:27	-- spdk/autotest.sh@32 -- # uname -s
00:02:56.616   23:36:27	-- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:56.616   23:36:27	-- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E'
00:02:56.616   23:36:27	-- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:02:56.616   23:36:27	-- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:02:56.616   23:36:27	-- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
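The core-dump setup at spdk/autotest.sh@33-40 above hides its redirections behind xtrace; the sketch below fills them in. The /proc/sys/kernel/core_pattern target is inferred from the apport handler read at @33, not shown in the trace itself.

  old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # save apport's handler
  mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
  # pipe cores to SPDK's collector (inferred redirect target)
  echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' \
      > /proc/sys/kernel/core_pattern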
00:02:56.616   23:36:27	-- spdk/autotest.sh@44 -- # modprobe nbd
00:02:56.616    23:36:27	-- spdk/autotest.sh@46 -- # type -P udevadm
00:02:56.616   23:36:27	-- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm
00:02:56.616   23:36:27	-- spdk/autotest.sh@48 -- # udevadm_pid=92505
00:02:56.616   23:36:27	-- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power
00:02:56.616   23:36:27	-- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property
00:02:56.616   23:36:27	-- spdk/autotest.sh@54 -- # echo 92530
00:02:56.616   23:36:27	-- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power
00:02:56.616   23:36:27	-- spdk/autotest.sh@56 -- # echo 92531
00:02:56.616   23:36:27	-- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power
00:02:56.616   23:36:27	-- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]]
00:02:56.616   23:36:27	-- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:56.616   23:36:27	-- spdk/autotest.sh@68 -- # timing_enter autotest
00:02:56.616   23:36:27	-- common/autotest_common.sh@722 -- # xtrace_disable
00:02:56.616   23:36:27	-- common/autotest_common.sh@10 -- # set +x
00:02:56.616   23:36:27	-- spdk/autotest.sh@70 -- # create_test_list
00:02:56.616   23:36:27	-- common/autotest_common.sh@746 -- # xtrace_disable
00:02:56.616   23:36:27	-- common/autotest_common.sh@10 -- # set +x
00:02:56.616     23:36:27	-- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:02:56.616    23:36:27	-- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:02:56.616   23:36:27	-- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk
00:02:56.616   23:36:27	-- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:02:56.616   23:36:27	-- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk
00:02:56.616   23:36:27	-- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod
00:02:56.616    23:36:27	-- common/autotest_common.sh@1450 -- # uname
00:02:56.616   23:36:27	-- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']'
00:02:56.616   23:36:27	-- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf
00:02:56.616    23:36:27	-- common/autotest_common.sh@1470 -- # uname
00:02:56.616   23:36:27	-- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]]
00:02:56.616   23:36:27	-- spdk/autotest.sh@79 -- # [[ y == y ]]
00:02:56.616   23:36:27	-- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:02:56.874  lcov: LCOV version 1.15
00:02:56.875   23:36:27	-- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:03:14.962  /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found
00:03:14.962  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno
00:03:14.962  /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found
00:03:14.962  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno
00:03:14.962  /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found
00:03:14.962  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno
00:03:41.509   23:37:09	-- spdk/autotest.sh@87 -- # timing_enter pre_cleanup
00:03:41.509   23:37:09	-- common/autotest_common.sh@722 -- # xtrace_disable
00:03:41.509   23:37:09	-- common/autotest_common.sh@10 -- # set +x
00:03:41.509   23:37:09	-- spdk/autotest.sh@89 -- # rm -f
00:03:41.509   23:37:09	-- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:41.509  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:03:41.509  0000:00:06.0 (1b36 0010): Already using the nvme driver
00:03:41.509   23:37:09	-- spdk/autotest.sh@94 -- # get_zoned_devs
00:03:41.509   23:37:09	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:03:41.509   23:37:09	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:03:41.509   23:37:09	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:03:41.509   23:37:09	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:03:41.509   23:37:09	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:03:41.510   23:37:09	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:03:41.510   23:37:09	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:41.510   23:37:09	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:03:41.510   23:37:09	-- spdk/autotest.sh@96 -- # (( 0 > 0 ))
00:03:41.510    23:37:09	-- spdk/autotest.sh@108 -- # ls /dev/nvme0n1
00:03:41.510    23:37:09	-- spdk/autotest.sh@108 -- # grep -v p
00:03:41.510   23:37:09	-- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:03:41.510   23:37:09	-- spdk/autotest.sh@110 -- # [[ -z '' ]]
00:03:41.510   23:37:09	-- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1
00:03:41.510   23:37:09	-- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt
00:03:41.510   23:37:09	-- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:41.510  No valid GPT data, bailing
00:03:41.510    23:37:09	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:41.510   23:37:09	-- scripts/common.sh@393 -- # pt=
00:03:41.510   23:37:09	-- scripts/common.sh@394 -- # return 1
00:03:41.510   23:37:09	-- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:41.510  1+0 records in
00:03:41.510  1+0 records out
00:03:41.510  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00530217 s, 198 MB/s
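The wipe traced at spdk/autotest.sh@108-112 above amounts to the loop below: every whole NVMe namespace whose partition check fails gets its first MiB zeroed. block_in_use here is a simplification of scripts/common.sh@380-394; its exit status when spdk-gpt.py does find a GPT is assumed.

  block_in_use() {                         # assumed: succeed when a partition table exists
      local block=$1 pt
      /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$block" && return 0
      pt=$(blkid -s PTTYPE -o value "$block")
      [[ -n $pt ]]
  }
  for dev in $(ls /dev/nvme*n* | grep -v p || true); do   # whole namespaces only
      block_in_use "$dev" || dd if=/dev/zero of="$dev" bs=1M count=1
  done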
00:03:41.510   23:37:09	-- spdk/autotest.sh@116 -- # sync
00:03:41.510   23:37:09	-- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:41.510   23:37:09	-- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:41.510    23:37:09	-- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:41.510    23:37:11	-- spdk/autotest.sh@122 -- # uname -s
00:03:41.510   23:37:11	-- spdk/autotest.sh@122 -- # '[' Linux = Linux ']'
00:03:41.510   23:37:11	-- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:03:41.510   23:37:11	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:41.510   23:37:11	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:41.510   23:37:11	-- common/autotest_common.sh@10 -- # set +x
00:03:41.510  ************************************
00:03:41.510  START TEST setup.sh
00:03:41.510  ************************************
00:03:41.510   23:37:11	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:03:41.510  * Looking for test storage...
00:03:41.510  * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:03:41.510    23:37:11	-- setup/test-setup.sh@10 -- # uname -s
00:03:41.510   23:37:11	-- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:41.510   23:37:11	-- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:03:41.510   23:37:11	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:41.510   23:37:11	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:41.510   23:37:11	-- common/autotest_common.sh@10 -- # set +x
00:03:41.510  ************************************
00:03:41.510  START TEST acl
00:03:41.510  ************************************
00:03:41.510   23:37:11	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:03:41.510  * Looking for test storage...
00:03:41.510  * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:03:41.510   23:37:11	-- setup/acl.sh@10 -- # get_zoned_devs
00:03:41.511   23:37:11	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:03:41.511   23:37:11	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:03:41.511   23:37:11	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:03:41.511   23:37:11	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:03:41.511   23:37:11	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:03:41.511   23:37:11	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:03:41.511   23:37:11	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:41.511   23:37:11	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:03:41.511   23:37:11	-- setup/acl.sh@12 -- # devs=()
00:03:41.511   23:37:11	-- setup/acl.sh@12 -- # declare -a devs
00:03:41.511   23:37:11	-- setup/acl.sh@13 -- # drivers=()
00:03:41.511   23:37:11	-- setup/acl.sh@13 -- # declare -A drivers
00:03:41.511   23:37:11	-- setup/acl.sh@51 -- # setup reset
00:03:41.511   23:37:11	-- setup/common.sh@9 -- # [[ reset == output ]]
00:03:41.511   23:37:11	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:41.511   23:37:11	-- setup/acl.sh@52 -- # collect_setup_devs
00:03:41.511   23:37:11	-- setup/acl.sh@16 -- # local dev driver
00:03:41.511   23:37:11	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:41.511    23:37:11	-- setup/acl.sh@15 -- # setup output status
00:03:41.511    23:37:11	-- setup/common.sh@9 -- # [[ output == output ]]
00:03:41.511    23:37:11	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:41.511  Hugepages
00:03:41.511  node     hugesize     free /  total
00:03:41.511   23:37:12	-- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:41.511   23:37:12	-- setup/acl.sh@19 -- # continue
00:03:41.511   23:37:12	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:41.511  
00:03:41.511  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:03:41.511   23:37:12	-- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:41.511   23:37:12	-- setup/acl.sh@19 -- # continue
00:03:41.511   23:37:12	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:41.511   23:37:12	-- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]]
00:03:41.511   23:37:12	-- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]]
00:03:41.511   23:37:12	-- setup/acl.sh@20 -- # continue
00:03:41.511   23:37:12	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:41.511   23:37:12	-- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]]
00:03:41.511   23:37:12	-- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:41.511   23:37:12	-- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]]
00:03:41.511   23:37:12	-- setup/acl.sh@22 -- # devs+=("$dev")
00:03:41.511   23:37:12	-- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:41.511   23:37:12	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:41.511   23:37:12	-- setup/acl.sh@24 -- # (( 1 > 0 ))
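The collect_setup_devs pass traced at setup/acl.sh@16-24 above parses the status table printed earlier (BDF in column 2, driver in column 6). A condensed sketch, keeping the PCI_BLOCKED filter from @21:

  devs=(); declare -A drivers
  while read -r _ dev _ _ _ driver _; do
      [[ $dev == *:*:*.* ]] || continue           # skip hugepage and header rows
      [[ $driver == nvme ]] || continue           # keep NVMe-bound functions only
      [[ $PCI_BLOCKED == *"$dev"* ]] && continue  # honor the block list
      devs+=("$dev"); drivers["$dev"]=$driver
  done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh status)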
00:03:41.511   23:37:12	-- setup/acl.sh@54 -- # run_test denied denied
00:03:41.511   23:37:12	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:41.511   23:37:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:41.511   23:37:12	-- common/autotest_common.sh@10 -- # set +x
00:03:41.770  ************************************
00:03:41.770  START TEST denied
00:03:41.770  ************************************
00:03:41.770   23:37:12	-- common/autotest_common.sh@1114 -- # denied
00:03:41.770   23:37:12	-- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0'
00:03:41.770   23:37:12	-- setup/acl.sh@38 -- # setup output config
00:03:41.770   23:37:12	-- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0'
00:03:41.770   23:37:12	-- setup/common.sh@9 -- # [[ output == output ]]
00:03:41.770   23:37:12	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:03:43.173  0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0
00:03:43.174   23:37:13	-- setup/acl.sh@40 -- # verify 0000:00:06.0
00:03:43.174   23:37:13	-- setup/acl.sh@28 -- # local dev driver
00:03:43.174   23:37:13	-- setup/acl.sh@30 -- # for dev in "$@"
00:03:43.174   23:37:13	-- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]]
00:03:43.174    23:37:13	-- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver
00:03:43.174   23:37:13	-- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:43.174   23:37:13	-- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:43.174   23:37:13	-- setup/acl.sh@41 -- # setup reset
00:03:43.174   23:37:13	-- setup/common.sh@9 -- # [[ reset == output ]]
00:03:43.174   23:37:13	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:43.466  
00:03:43.466  real	0m1.888s
00:03:43.466  user	0m0.506s
00:03:43.466  sys	0m1.432s
00:03:43.466   23:37:14	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:43.466   23:37:14	-- common/autotest_common.sh@10 -- # set +x
00:03:43.466  ************************************
00:03:43.466  END TEST denied
00:03:43.466  ************************************
00:03:43.466   23:37:14	-- setup/acl.sh@55 -- # run_test allowed allowed
00:03:43.466   23:37:14	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:43.466   23:37:14	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:43.466   23:37:14	-- common/autotest_common.sh@10 -- # set +x
00:03:43.466  ************************************
00:03:43.466  START TEST allowed
00:03:43.466  ************************************
00:03:43.466   23:37:14	-- common/autotest_common.sh@1114 -- # allowed
00:03:43.466   23:37:14	-- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0
00:03:43.466   23:37:14	-- setup/acl.sh@45 -- # setup output config
00:03:43.466   23:37:14	-- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*'
00:03:43.466   23:37:14	-- setup/common.sh@9 -- # [[ output == output ]]
00:03:43.466   23:37:14	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:03:45.369  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:03:45.369   23:37:16	-- setup/acl.sh@47 -- # verify
00:03:45.369   23:37:16	-- setup/acl.sh@28 -- # local dev driver
00:03:45.369   23:37:16	-- setup/acl.sh@48 -- # setup reset
00:03:45.369   23:37:16	-- setup/common.sh@9 -- # [[ reset == output ]]
00:03:45.369   23:37:16	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:45.936  
00:03:45.936  real	0m2.343s
00:03:45.936  user	0m0.490s
00:03:45.936  sys	0m1.847s
00:03:45.936   23:37:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:45.936   23:37:16	-- common/autotest_common.sh@10 -- # set +x
00:03:45.936  ************************************
00:03:45.936  END TEST allowed
00:03:45.936  ************************************
00:03:45.936  ************************************
00:03:45.936  END TEST acl
00:03:45.936  ************************************
00:03:45.936  
00:03:45.936  real	0m5.275s
00:03:45.936  user	0m1.611s
00:03:45.936  sys	0m3.765s
00:03:45.936   23:37:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:45.936   23:37:16	-- common/autotest_common.sh@10 -- # set +x
00:03:45.937   23:37:16	-- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:03:45.937   23:37:16	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:45.937   23:37:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:45.937   23:37:16	-- common/autotest_common.sh@10 -- # set +x
00:03:45.937  ************************************
00:03:45.937  START TEST hugepages
00:03:45.937  ************************************
00:03:45.937   23:37:16	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:03:46.196  * Looking for test storage...
00:03:46.196  * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:03:46.197   23:37:16	-- setup/hugepages.sh@10 -- # nodes_sys=()
00:03:46.197   23:37:16	-- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:03:46.197   23:37:16	-- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:03:46.197   23:37:16	-- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:03:46.197   23:37:16	-- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:03:46.197    23:37:16	-- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:03:46.197    23:37:16	-- setup/common.sh@17 -- # local get=Hugepagesize
00:03:46.197    23:37:16	-- setup/common.sh@18 -- # local node=
00:03:46.197    23:37:16	-- setup/common.sh@19 -- # local var val
00:03:46.197    23:37:16	-- setup/common.sh@20 -- # local mem_f mem
00:03:46.197    23:37:16	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.197    23:37:16	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.197    23:37:16	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.197    23:37:16	-- setup/common.sh@28 -- # mapfile -t mem
00:03:46.197    23:37:16	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197     23:37:16	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         2951332 kB' 'MemAvailable:    7387376 kB' 'Buffers:           35200 kB' 'Cached:          4539352 kB' 'SwapCached:            0 kB' 'Active:          1002716 kB' 'Inactive:        3702568 kB' 'Active(anon):       1100 kB' 'Inactive(anon):   141296 kB' 'Active(file):    1001616 kB' 'Inactive(file):  3561272 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               668 kB' 'Writeback:             4 kB' 'AnonPages:        160104 kB' 'Mapped:            68384 kB' 'Shmem:              2600 kB' 'KReclaimable:     194460 kB' 'Slab:             258968 kB' 'SReclaimable:     194460 kB' 'SUnreclaim:        64508 kB' 'KernelStack:        4456 kB' 'PageTables:         3852 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     4024336 kB' 'Committed_AS:     506600 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19476 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    2048' 'HugePages_Free:     2048' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         4194304 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.197    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.197    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # continue
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # IFS=': '
00:03:46.198    23:37:16	-- setup/common.sh@31 -- # read -r var val _
00:03:46.198    23:37:16	-- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:46.198    23:37:16	-- setup/common.sh@33 -- # echo 2048
00:03:46.198    23:37:16	-- setup/common.sh@33 -- # return 0
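The get_meminfo walk above (setup/common.sh@17-33) scans every /proc/meminfo field until it reaches Hugepagesize and echoes 2048. A compact sketch of the same helper; the per-node variant additionally strips the 'Node N ' prefix (the mapfile step at @28-29), omitted here:

  get_meminfo() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "$mem_f"
      return 1
  }
  get_meminfo Hugepagesize   # prints 2048 on this VM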
00:03:46.198   23:37:16	-- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:46.198   23:37:16	-- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:46.198   23:37:16	-- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:46.198   23:37:16	-- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:46.198   23:37:16	-- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:46.198   23:37:16	-- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:46.198   23:37:16	-- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:46.198   23:37:16	-- setup/hugepages.sh@207 -- # get_nodes
00:03:46.198   23:37:16	-- setup/hugepages.sh@27 -- # local node
00:03:46.198   23:37:16	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:46.198   23:37:16	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:46.198   23:37:16	-- setup/hugepages.sh@32 -- # no_nodes=1
00:03:46.198   23:37:16	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
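[note] get_nodes enumerates NUMA nodes with the extglob pattern node+([0-9]) under /sys/devices/system/node, strips the path prefix with ${node##*node} to get the bare node id, and seeds one nodes_sys entry per node (a single node here, hence no_nodes=1). A self-contained sketch of the same enumeration (the 2048 seed matches the trace above):

    #!/usr/bin/env bash
    shopt -s extglob nullglob            # +([0-9]) needs extglob; nullglob handles 0 matches
    declare -a nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=2048   # node id -> per-node hugepage seed
    done
    echo "nodes: ${!nodes_sys[*]} (no_nodes=${#nodes_sys[@]})"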
00:03:46.198   23:37:16	-- setup/hugepages.sh@208 -- # clear_hp
00:03:46.198   23:37:16	-- setup/hugepages.sh@37 -- # local node hp
00:03:46.198   23:37:16	-- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:46.198   23:37:16	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:46.198   23:37:16	-- setup/hugepages.sh@41 -- # echo 0
00:03:46.198   23:37:16	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:46.198   23:37:16	-- setup/hugepages.sh@41 -- # echo 0
00:03:46.198   23:37:16	-- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:46.198   23:37:16	-- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
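[note] clear_hp walks every node's hugepage directories and writes 0 into each pool, so the test starts with no pre-allocated pages of any size, then exports CLEAR_HUGE=yes for the setup script that runs next. The same steps as a sketch (root required; the meaning of CLEAR_HUGE is inferred from its name and from where the trace exports it):

    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"    # drop this node's pool for this page size
        done
    done
    export CLEAR_HUGE=yes                  # presumably tells setup.sh to also clean hugetlbfs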
00:03:46.198   23:37:16	-- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:46.198   23:37:16	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:46.198   23:37:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:46.198   23:37:16	-- common/autotest_common.sh@10 -- # set +x
00:03:46.198  ************************************
00:03:46.198  START TEST default_setup
00:03:46.198  ************************************
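[note] run_test is the harness wrapper behind the banner above: the traced '[' 2 -le 1 ']' is its argument-count guard (fail if fewer than a name plus a command), it silences xtrace, prints the START TEST banner, and invokes the named function. A rough sketch of such a wrapper (not SPDK's exact code, which also does timing and bookkeeping):

    run_test() {
        local name=$1; shift
        (( $# >= 1 )) || return 1          # need a command to run
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"                               # here: the default_setup function
    }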
00:03:46.198   23:37:16	-- common/autotest_common.sh@1114 -- # default_setup
00:03:46.198   23:37:16	-- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:46.198   23:37:16	-- setup/hugepages.sh@49 -- # local size=2097152
00:03:46.198   23:37:16	-- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:46.198   23:37:16	-- setup/hugepages.sh@51 -- # shift
00:03:46.198   23:37:16	-- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:46.198   23:37:16	-- setup/hugepages.sh@52 -- # local node_ids
00:03:46.198   23:37:16	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:46.198   23:37:16	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:46.198   23:37:16	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:46.198   23:37:16	-- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:46.198   23:37:16	-- setup/hugepages.sh@62 -- # local user_nodes
00:03:46.198   23:37:16	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:46.198   23:37:16	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:46.198   23:37:16	-- setup/hugepages.sh@67 -- # nodes_test=()
00:03:46.198   23:37:16	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:46.198   23:37:16	-- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:46.198   23:37:16	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:46.198   23:37:16	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:46.198   23:37:16	-- setup/hugepages.sh@73 -- # return 0
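[note] get_test_nr_hugepages converts the requested size into a page count: with size=2097152 (kB, i.e. 2 GiB) and the 2048 kB default page discovered earlier, it requests 2097152 / 2048 = 1024 pages, and since node 0 was passed explicitly, all 1024 land in nodes_test[0]. The arithmetic in one line:

    size_kb=2097152 page_kb=2048
    echo $(( size_kb / page_kb ))   # -> 1024 pages of 2 MiB = 2 GiB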
00:03:46.198   23:37:16	-- setup/hugepages.sh@137 -- # setup output
00:03:46.198   23:37:16	-- setup/common.sh@9 -- # [[ output == output ]]
00:03:46.198   23:37:16	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:46.766  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:03:46.766  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
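[note] setup.sh reports its binding decisions here: 00:03.0 backs the mounted vda partitions, so it is left on its kernel driver, while the NVMe controller at 00:06.0 (1b36 0010, QEMU's emulated NVMe) is moved from the kernel nvme driver to uio_pci_generic for userspace I/O. setup.sh's internals are not shown in this log; the usual sysfs mechanism for such a rebind looks like this (a sketch, run as root):

    bdf=0000:00:06.0
    modprobe uio_pci_generic
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"           # detach nvme
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe                          # re-probe with override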
00:03:47.338   23:37:17	-- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:47.338   23:37:17	-- setup/hugepages.sh@89 -- # local node
00:03:47.338   23:37:17	-- setup/hugepages.sh@90 -- # local sorted_t
00:03:47.338   23:37:17	-- setup/hugepages.sh@91 -- # local sorted_s
00:03:47.338   23:37:17	-- setup/hugepages.sh@92 -- # local surp
00:03:47.338   23:37:17	-- setup/hugepages.sh@93 -- # local resv
00:03:47.338   23:37:17	-- setup/hugepages.sh@94 -- # local anon
00:03:47.338   23:37:17	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:47.338    23:37:17	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:47.338    23:37:17	-- setup/common.sh@17 -- # local get=AnonHugePages
00:03:47.338    23:37:17	-- setup/common.sh@18 -- # local node=
00:03:47.338    23:37:17	-- setup/common.sh@19 -- # local var val
00:03:47.338    23:37:17	-- setup/common.sh@20 -- # local mem_f mem
00:03:47.338    23:37:17	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.338    23:37:17	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.338    23:37:17	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.338    23:37:17	-- setup/common.sh@28 -- # mapfile -t mem
00:03:47.338    23:37:17	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338     23:37:17	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5048692 kB' 'MemAvailable:    9484640 kB' 'Buffers:           35200 kB' 'Cached:          4539348 kB' 'SwapCached:            0 kB' 'Active:          1002724 kB' 'Inactive:        3704320 kB' 'Active(anon):       1088 kB' 'Inactive(anon):   143072 kB' 'Active(file):    1001636 kB' 'Inactive(file):  3561248 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'AnonPages:        161756 kB' 'Mapped:            68132 kB' 'Shmem:              2596 kB' 'KReclaimable:     194368 kB' 'Slab:             258764 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64396 kB' 'KernelStack:        4384 kB' 'PageTables:         3624 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072912 kB' 'Committed_AS:     508704 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19492 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.338    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.338    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:47.339    23:37:17	-- setup/common.sh@33 -- # echo 0
00:03:47.339    23:37:17	-- setup/common.sh@33 -- # return 0
00:03:47.339   23:37:17	-- setup/hugepages.sh@97 -- # anon=0
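[note] verify_nr_hugepages first decides whether anonymous THP could inflate the numbers: the host reports 'always [madvise] never' with [madvise] selected, which is not [never], so the function reads AnonHugePages from meminfo and gets 0. The gate plus lookup, reusing the get_meminfo_field sketch from earlier:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_field AnonHugePages)               # 0 on this host
    else
        anon=0
    fi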
00:03:47.339    23:37:17	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:47.339    23:37:17	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:47.339    23:37:17	-- setup/common.sh@18 -- # local node=
00:03:47.339    23:37:17	-- setup/common.sh@19 -- # local var val
00:03:47.339    23:37:17	-- setup/common.sh@20 -- # local mem_f mem
00:03:47.339    23:37:17	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.339    23:37:17	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.339    23:37:17	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.339    23:37:17	-- setup/common.sh@28 -- # mapfile -t mem
00:03:47.339    23:37:17	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339     23:37:17	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5049476 kB' 'MemAvailable:    9485424 kB' 'Buffers:           35200 kB' 'Cached:          4539352 kB' 'SwapCached:            0 kB' 'Active:          1002724 kB' 'Inactive:        3704028 kB' 'Active(anon):       1088 kB' 'Inactive(anon):   142780 kB' 'Active(file):    1001636 kB' 'Inactive(file):  3561248 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'AnonPages:        161480 kB' 'Mapped:            68132 kB' 'Shmem:              2596 kB' 'KReclaimable:     194368 kB' 'Slab:             258764 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64396 kB' 'KernelStack:        4416 kB' 'PageTables:         3708 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072912 kB' 'Committed_AS:     508704 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19492 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.339    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.339    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.340    23:37:17	-- setup/common.sh@33 -- # echo 0
00:03:47.340    23:37:17	-- setup/common.sh@33 -- # return 0
00:03:47.340   23:37:17	-- setup/hugepages.sh@99 -- # surp=0
00:03:47.340    23:37:17	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:47.340    23:37:17	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:47.340    23:37:17	-- setup/common.sh@18 -- # local node=
00:03:47.340    23:37:17	-- setup/common.sh@19 -- # local var val
00:03:47.340    23:37:17	-- setup/common.sh@20 -- # local mem_f mem
00:03:47.340    23:37:17	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.340    23:37:17	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.340    23:37:17	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.340    23:37:17	-- setup/common.sh@28 -- # mapfile -t mem
00:03:47.340    23:37:17	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340     23:37:17	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5049476 kB' 'MemAvailable:    9485424 kB' 'Buffers:           35200 kB' 'Cached:          4539352 kB' 'SwapCached:            0 kB' 'Active:          1002716 kB' 'Inactive:        3704240 kB' 'Active(anon):       1080 kB' 'Inactive(anon):   142992 kB' 'Active(file):    1001636 kB' 'Inactive(file):  3561248 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'AnonPages:        161644 kB' 'Mapped:            68116 kB' 'Shmem:              2596 kB' 'KReclaimable:     194368 kB' 'Slab:             258796 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64428 kB' 'KernelStack:        4464 kB' 'PageTables:         3804 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072912 kB' 'Committed_AS:     508704 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19492 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.340    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.340    23:37:17	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.341    23:37:17	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.341    23:37:17	-- setup/common.sh@33 -- # echo 0
00:03:47.341    23:37:17	-- setup/common.sh@33 -- # return 0
00:03:47.341   23:37:17	-- setup/hugepages.sh@100 -- # resv=0
00:03:47.341  nr_hugepages=1024
00:03:47.341   23:37:17	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:47.341  resv_hugepages=0
00:03:47.341   23:37:17	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:47.341  surplus_hugepages=0
00:03:47.341   23:37:17	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:47.341  anon_hugepages=0
00:03:47.341   23:37:17	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:47.341   23:37:17	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:47.341   23:37:17	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
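[note] With anon, surp, and resv all collected (0, 0, 0) and the summary echoed above, verify_nr_hugepages asserts the pool is exactly what default_setup asked for: HugePages_Total must equal nr_hugepages + surp + resv and equal nr_hugepages itself (both sides are 1024 here, so both checks pass). The same checks as a sketch, again reusing get_meminfo_field; the HugePages_Total scan that follows belongs to the function's next, per-node comparison stage:

    nr_hugepages=1024 surp=0 resv=0
    total=$(get_meminfo_field HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo "unexpected surplus/reserved pages" >&2
    (( total == nr_hugepages ))               || echo "pool size != requested" >&2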
00:03:47.341    23:37:17	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:47.341    23:37:17	-- setup/common.sh@17 -- # local get=HugePages_Total
00:03:47.341    23:37:17	-- setup/common.sh@18 -- # local node=
00:03:47.341    23:37:17	-- setup/common.sh@19 -- # local var val
00:03:47.341    23:37:17	-- setup/common.sh@20 -- # local mem_f mem
00:03:47.341    23:37:17	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.341    23:37:17	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.341    23:37:17	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.341    23:37:17	-- setup/common.sh@28 -- # mapfile -t mem
00:03:47.341    23:37:17	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.341    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.342    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.342     23:37:17	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5050456 kB' 'MemAvailable:    9486408 kB' 'Buffers:           35200 kB' 'Cached:          4539352 kB' 'SwapCached:            0 kB' 'Active:          1002716 kB' 'Inactive:        3703932 kB' 'Active(anon):       1080 kB' 'Inactive(anon):   142680 kB' 'Active(file):    1001636 kB' 'Inactive(file):  3561252 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'AnonPages:        161328 kB' 'Mapped:            68076 kB' 'Shmem:              2596 kB' 'KReclaimable:     194368 kB' 'Slab:             258796 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64428 kB' 'KernelStack:        4404 kB' 'PageTables:         3704 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072912 kB' 'Committed_AS:     508704 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19492 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:47.342    23:37:17	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:47.342    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.342    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.342    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.342    23:37:17	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:47.342    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.342    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.342    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.342    23:37:17	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:47.342    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.342    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.342    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.342    23:37:17	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:47.342    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.342    23:37:17	-- setup/common.sh@31 -- # IFS=': '
00:03:47.342    23:37:17	-- setup/common.sh@31 -- # read -r var val _
00:03:47.342    23:37:17	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:47.342    23:37:17	-- setup/common.sh@32 -- # continue
00:03:47.342  [... xtrace elided: the same "IFS=': ' / read -r var val _ / [[ <field> == HugePages_Total ]] / continue" cycle repeats for SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages and FilePmdMapped ...]
00:03:47.343    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:47.343    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:47.343    23:37:18	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:47.343    23:37:18	-- setup/common.sh@33 -- # echo 1024
00:03:47.343    23:37:18	-- setup/common.sh@33 -- # return 0
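
The block above is one full pass of get_meminfo: setup/common.sh splits each meminfo line on IFS=': ', skips every field that is not the requested one, and echoes the value once the field matches (1024 for HugePages_Total here). A minimal sketch of that loop, reconstructed from the xtrace rather than quoted from the SPDK source:

  # Sketch: print the value of a single /proc/meminfo field.
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip Cached, SwapCached, ... as traced above
          echo "$val"                        # kB value, or a bare count for HugePages_*
          return 0
      done < /proc/meminfo
      return 1
  }
  # get_meminfo HugePages_Total   -> 1024, matching the "echo 1024" above
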
00:03:47.343   23:37:18	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:47.343   23:37:18	-- setup/hugepages.sh@112 -- # get_nodes
00:03:47.343   23:37:18	-- setup/hugepages.sh@27 -- # local node
00:03:47.343   23:37:18	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:47.343   23:37:18	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:47.343   23:37:18	-- setup/hugepages.sh@32 -- # no_nodes=1
00:03:47.343   23:37:18	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:47.343   23:37:18	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:47.343   23:37:18	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:47.343    23:37:18	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:47.343    23:37:18	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:47.343    23:37:18	-- setup/common.sh@18 -- # local node=0
00:03:47.343    23:37:18	-- setup/common.sh@19 -- # local var val
00:03:47.343    23:37:18	-- setup/common.sh@20 -- # local mem_f mem
00:03:47.343    23:37:18	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.343    23:37:18	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:47.343    23:37:18	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:47.343    23:37:18	-- setup/common.sh@28 -- # mapfile -t mem
00:03:47.343    23:37:18	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.343    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:47.343     23:37:18	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5050456 kB' 'MemUsed:         7192524 kB' 'SwapCached:            0 kB' 'Active:          1002716 kB' 'Inactive:        3704452 kB' 'Active(anon):       1080 kB' 'Inactive(anon):   143200 kB' 'Active(file):    1001636 kB' 'Inactive(file):  3561252 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'FilePages:       4574552 kB' 'Mapped:            68076 kB' 'AnonPages:        161848 kB' 'Shmem:              2596 kB' 'KernelStack:        4472 kB' 'PageTables:         3704 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     194368 kB' 'Slab:             258796 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64428 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:        0 kB' 'FilePmdMapped:        0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:03:47.343    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:47.343    23:37:18	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.343    23:37:18	-- setup/common.sh@32 -- # continue
00:03:47.343  [... xtrace elided: the same cycle repeats against HugePages_Surp for MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, HugePages_Total and HugePages_Free ...]
00:03:47.344    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:47.344    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:47.344    23:37:18	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.344    23:37:18	-- setup/common.sh@33 -- # echo 0
00:03:47.344    23:37:18	-- setup/common.sh@33 -- # return 0
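
For the per-node query above, the trace switches mem_f to /sys/devices/system/node/node0/meminfo and strips the "Node 0 " prefix those lines carry (the ${mem[@]#Node +([0-9]) } expansion at common.sh@29). A hedged sketch of just that step:

  shopt -s extglob                                  # needed for the +([0-9]) pattern
  mapfile -t mem < /sys/devices/system/node/node0/meminfo
  # Per-node lines read "Node 0 HugePages_Surp: 0"; dropping the "Node <n> "
  # prefix lets the same field-matching loop handle both file layouts.
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]}" | grep HugePages_Surp   # -> "HugePages_Surp:      0"
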
00:03:47.344   23:37:18	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:47.344   23:37:18	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:47.344   23:37:18	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:47.344   23:37:18	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:47.344  node0=1024 expecting 1024
00:03:47.344   23:37:18	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:47.344   23:37:18	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
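
Taken together, the default_setup verification above reduces to two assertions: the global HugePages_Total must equal the requested pages plus surplus and reserved pages, and each node must end up with its expected share. Restated compactly with the values from this trace (variable names mirror hugepages.sh, but this is an illustration, not the script itself):

  nr_hugepages=1024; surp=0; resv=0                   # read via get_meminfo above
  (( 1024 == nr_hugepages + surp + resv )) || exit 1  # hugepages.sh@110
  nodes_test[0]=$(( 1024 + resv + surp ))             # per-node expectation
  echo "node0=${nodes_test[0]} expecting 1024"        # the log line printed above
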
00:03:47.344  
00:03:47.344  real	0m1.190s
00:03:47.344  user	0m0.387s
00:03:47.344  sys	0m0.787s
00:03:47.344   23:37:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:47.344   23:37:18	-- common/autotest_common.sh@10 -- # set +x
00:03:47.344  ************************************
00:03:47.344  END TEST default_setup
00:03:47.344  ************************************
00:03:47.603   23:37:18	-- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:47.603   23:37:18	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:47.603   23:37:18	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:47.603   23:37:18	-- common/autotest_common.sh@10 -- # set +x
00:03:47.603  ************************************
00:03:47.603  START TEST per_node_1G_alloc
00:03:47.603  ************************************
00:03:47.603   23:37:18	-- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:03:47.603   23:37:18	-- setup/hugepages.sh@143 -- # local IFS=,
00:03:47.603   23:37:18	-- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:03:47.603   23:37:18	-- setup/hugepages.sh@49 -- # local size=1048576
00:03:47.603   23:37:18	-- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:47.603   23:37:18	-- setup/hugepages.sh@51 -- # shift
00:03:47.603   23:37:18	-- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:47.603   23:37:18	-- setup/hugepages.sh@52 -- # local node_ids
00:03:47.603   23:37:18	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:47.603   23:37:18	-- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:47.603   23:37:18	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:47.603   23:37:18	-- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:47.603   23:37:18	-- setup/hugepages.sh@62 -- # local user_nodes
00:03:47.603   23:37:18	-- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:47.603   23:37:18	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:47.603   23:37:18	-- setup/hugepages.sh@67 -- # nodes_test=()
00:03:47.603   23:37:18	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:47.603   23:37:18	-- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:47.603   23:37:18	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:47.603   23:37:18	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:47.603   23:37:18	-- setup/hugepages.sh@73 -- # return 0
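
The nr_hugepages=512 above is straightforward arithmetic: the test requests 1048576 kB (1 GiB) for node 0, and with the default 2048 kB hugepage size that is 1048576 / 2048 = 512 pages, all assigned to the one user-specified node. A sketch under those assumptions:

  size=1048576                                   # kB requested: get_test_nr_hugepages 1048576 0
  default_hugepages=2048                         # Hugepagesize in kB
  (( size >= default_hugepages )) || exit 1
  nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512
  user_nodes=(0)
  for node in "${user_nodes[@]}"; do
      nodes_test[node]=$nr_hugepages             # all 512 pages pinned to node 0
  done
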
00:03:47.603   23:37:18	-- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:47.603   23:37:18	-- setup/hugepages.sh@146 -- # HUGENODE=0
00:03:47.603   23:37:18	-- setup/hugepages.sh@146 -- # setup output
00:03:47.603   23:37:18	-- setup/common.sh@9 -- # [[ output == output ]]
00:03:47.603   23:37:18	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:47.862  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:03:47.862  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
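
setup.sh consumes the NRHUGE and HUGENODE values exported above; on Linux the per-node allocation it performs comes down to a write through the standard hugepage sysfs interface (sketched below from the kernel ABI, not quoted from setup.sh), while devices backing mounted filesystems, like vda here, are left unbound:

  # Standard kernel interface for per-node 2 MiB hugepages (illustrative):
  NRHUGE=512
  HUGENODE=0
  echo "$NRHUGE" > /sys/devices/system/node/node$HUGENODE/hugepages/hugepages-2048kB/nr_hugepages
  grep HugePages_Total /proc/meminfo    # expect "HugePages_Total:     512"
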
00:03:48.122   23:37:18	-- setup/hugepages.sh@147 -- # nr_hugepages=512
00:03:48.122   23:37:18	-- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:48.122   23:37:18	-- setup/hugepages.sh@89 -- # local node
00:03:48.122   23:37:18	-- setup/hugepages.sh@90 -- # local sorted_t
00:03:48.122   23:37:18	-- setup/hugepages.sh@91 -- # local sorted_s
00:03:48.122   23:37:18	-- setup/hugepages.sh@92 -- # local surp
00:03:48.122   23:37:18	-- setup/hugepages.sh@93 -- # local resv
00:03:48.122   23:37:18	-- setup/hugepages.sh@94 -- # local anon
00:03:48.122   23:37:18	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:48.122    23:37:18	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:48.122    23:37:18	-- setup/common.sh@17 -- # local get=AnonHugePages
00:03:48.122    23:37:18	-- setup/common.sh@18 -- # local node=
00:03:48.122    23:37:18	-- setup/common.sh@19 -- # local var val
00:03:48.122    23:37:18	-- setup/common.sh@20 -- # local mem_f mem
00:03:48.122    23:37:18	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.122    23:37:18	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.122    23:37:18	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.122    23:37:18	-- setup/common.sh@28 -- # mapfile -t mem
00:03:48.122    23:37:18	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.122    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.122    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.122     23:37:18	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         6091972 kB' 'MemAvailable:   10527928 kB' 'Buffers:           35200 kB' 'Cached:          4539352 kB' 'SwapCached:            0 kB' 'Active:          1002724 kB' 'Inactive:        3704300 kB' 'Active(anon):       1080 kB' 'Inactive(anon):   143052 kB' 'Active(file):    1001644 kB' 'Inactive(file):  3561248 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'AnonPages:        161648 kB' 'Mapped:            68120 kB' 'Shmem:              2596 kB' 'KReclaimable:     194368 kB' 'Slab:             258660 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64292 kB' 'KernelStack:        4436 kB' 'PageTables:         3556 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5597200 kB' 'Committed_AS:     508704 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19476 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:48.122    23:37:18	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:48.122    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.122  [... xtrace elided: the same cycle repeats against AnonHugePages for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted ...]
00:03:48.123    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.123    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.123    23:37:18	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:48.123    23:37:18	-- setup/common.sh@33 -- # echo 0
00:03:48.123    23:37:18	-- setup/common.sh@33 -- # return 0
00:03:48.123   23:37:18	-- setup/hugepages.sh@97 -- # anon=0
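
The bracket test at hugepages.sh@96 above is a transparent-hugepage guard: the kernel reports the active THP mode in brackets ("always [madvise] never" here), and only when that mode is not [never] does verify_nr_hugepages sample AnonHugePages, which this run reports as 0 kB. A sketch of that check:

  # The THP mode string marks the active mode in brackets, e.g. "always [madvise] never".
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # 0 kB in the trace above
  fi
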
00:03:48.123    23:37:18	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:48.123    23:37:18	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.123    23:37:18	-- setup/common.sh@18 -- # local node=
00:03:48.123    23:37:18	-- setup/common.sh@19 -- # local var val
00:03:48.123    23:37:18	-- setup/common.sh@20 -- # local mem_f mem
00:03:48.123    23:37:18	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.123    23:37:18	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.123    23:37:18	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.123    23:37:18	-- setup/common.sh@28 -- # mapfile -t mem
00:03:48.123    23:37:18	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.123    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.123     23:37:18	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         6092200 kB' 'MemAvailable:   10528156 kB' 'Buffers:           35200 kB' 'Cached:          4539352 kB' 'SwapCached:            0 kB' 'Active:          1002736 kB' 'Inactive:        3704224 kB' 'Active(anon):       1088 kB' 'Inactive(anon):   142980 kB' 'Active(file):    1001648 kB' 'Inactive(file):  3561244 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'AnonPages:        161864 kB' 'Mapped:            68092 kB' 'Shmem:              2596 kB' 'KReclaimable:     194368 kB' 'Slab:             258660 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64292 kB' 'KernelStack:        4380 kB' 'PageTables:         3436 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5597200 kB' 'Committed_AS:     508704 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19492 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:48.123    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.123    23:37:18	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.123    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.123  [... xtrace elided: the same cycle repeats against HugePages_Surp for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, HugePages_Total, HugePages_Free and HugePages_Rsvd ...]
00:03:48.388    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.388    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.388    23:37:18	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.388    23:37:18	-- setup/common.sh@33 -- # echo 0
00:03:48.388    23:37:18	-- setup/common.sh@33 -- # return 0
00:03:48.388   23:37:18	-- setup/hugepages.sh@99 -- # surp=0
00:03:48.388    23:37:18	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:48.388    23:37:18	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:48.388    23:37:18	-- setup/common.sh@18 -- # local node=
00:03:48.388    23:37:18	-- setup/common.sh@19 -- # local var val
00:03:48.388    23:37:18	-- setup/common.sh@20 -- # local mem_f mem
00:03:48.388    23:37:18	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.388    23:37:18	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.388    23:37:18	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.388    23:37:18	-- setup/common.sh@28 -- # mapfile -t mem
00:03:48.388    23:37:18	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.388    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.388     23:37:18	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         6092200 kB' 'MemAvailable:   10528156 kB' 'Buffers:           35200 kB' 'Cached:          4539352 kB' 'SwapCached:            0 kB' 'Active:          1002736 kB' 'Inactive:        3703984 kB' 'Active(anon):       1088 kB' 'Inactive(anon):   142740 kB' 'Active(file):    1001648 kB' 'Inactive(file):  3561244 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'AnonPages:        161368 kB' 'Mapped:            68092 kB' 'Shmem:              2596 kB' 'KReclaimable:     194368 kB' 'Slab:             258660 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64292 kB' 'KernelStack:        4360 kB' 'PageTables:         3564 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5597200 kB' 'Committed_AS:     508704 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19508 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:48.388    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.388    23:37:18	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.388    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389  [... xtrace elided: the same cycle repeats against HugePages_Rsvd for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file) and Unevictable ...]
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.389    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.389    23:37:18	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.389    23:37:18	-- setup/common.sh@33 -- # echo 0
00:03:48.389    23:37:18	-- setup/common.sh@33 -- # return 0
00:03:48.389   23:37:18	-- setup/hugepages.sh@100 -- # resv=0
00:03:48.388   23:37:18	-- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:03:48.388  nr_hugepages=512
00:03:48.388   23:37:18	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:48.388  resv_hugepages=0
00:03:48.388   23:37:18	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:48.388  surplus_hugepages=0
00:03:48.388   23:37:18	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:48.388  anon_hugepages=0
00:03:48.389   23:37:18	-- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:48.389   23:37:18	-- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
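[Annotation] The checks at hugepages.sh@107/@109 (and again at @110 after re-reading HugePages_Total) assert that the kernel's hugepage accounting matches the requested count plus surplus and reserved pages. A hedged sketch of that verification, using the values captured above and the get_meminfo reconstruction from the earlier note (assumed helper):

nr_hugepages=512 surp=0 resv=0                 # captured earlier in the trace
total=$(get_meminfo HugePages_Total)           # 512 in this run
(( total == nr_hugepages + surp + resv )) \
  && echo "hugepage accounting consistent: $total pages"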
00:03:48.389    23:37:18	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:48.389    23:37:18	-- setup/common.sh@17 -- # local get=HugePages_Total
00:03:48.390    23:37:18	-- setup/common.sh@18 -- # local node=
00:03:48.390    23:37:18	-- setup/common.sh@19 -- # local var val
00:03:48.390    23:37:18	-- setup/common.sh@20 -- # local mem_f mem
00:03:48.390    23:37:18	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.390    23:37:18	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.390    23:37:18	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.390    23:37:18	-- setup/common.sh@28 -- # mapfile -t mem
00:03:48.390    23:37:18	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390     23:37:18	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         6092200 kB' 'MemAvailable:   10528156 kB' 'Buffers:           35200 kB' 'Cached:          4539352 kB' 'SwapCached:            0 kB' 'Active:          1002736 kB' 'Inactive:        3704192 kB' 'Active(anon):       1088 kB' 'Inactive(anon):   142948 kB' 'Active(file):    1001648 kB' 'Inactive(file):  3561244 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'AnonPages:        161576 kB' 'Mapped:            68092 kB' 'Shmem:              2596 kB' 'KReclaimable:     194368 kB' 'Slab:             258660 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64292 kB' 'KernelStack:        4412 kB' 'PageTables:         3772 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5597200 kB' 'Committed_AS:     508704 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19508 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.390    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.390    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.391    23:37:18	-- setup/common.sh@33 -- # echo 512
00:03:48.391    23:37:18	-- setup/common.sh@33 -- # return 0
00:03:48.391   23:37:18	-- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:48.391   23:37:18	-- setup/hugepages.sh@112 -- # get_nodes
00:03:48.391   23:37:18	-- setup/hugepages.sh@27 -- # local node
00:03:48.391   23:37:18	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:48.391   23:37:18	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:48.391   23:37:18	-- setup/hugepages.sh@32 -- # no_nodes=1
00:03:48.391   23:37:18	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
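[Annotation] get_nodes (hugepages.sh@27-@33) walks sysfs to count NUMA nodes and record a per-node hugepage count. The trace shows the evaluated result (nodes_sys[0]=512); where that 512 is read from is not visible, so the sysfs counter below is an assumption:

shopt -s extglob nullglob
declare -a nodes_sys
get_nodes() {
  local node
  for node in /sys/devices/system/node/node+([0-9]); do
    # nodeN -> N; per-node count assumed to come from the 2048kB counter.
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 ))   # at least one node must be visible
}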
00:03:48.391   23:37:18	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:48.391   23:37:18	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:48.391    23:37:18	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:48.391    23:37:18	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.391    23:37:18	-- setup/common.sh@18 -- # local node=0
00:03:48.391    23:37:18	-- setup/common.sh@19 -- # local var val
00:03:48.391    23:37:18	-- setup/common.sh@20 -- # local mem_f mem
00:03:48.391    23:37:18	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.391    23:37:18	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:48.391    23:37:18	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:48.391    23:37:18	-- setup/common.sh@28 -- # mapfile -t mem
00:03:48.391    23:37:18	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391     23:37:18	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         6092724 kB' 'MemUsed:         6150256 kB' 'SwapCached:            0 kB' 'Active:          1002736 kB' 'Inactive:        3703932 kB' 'Active(anon):       1088 kB' 'Inactive(anon):   142688 kB' 'Active(file):    1001648 kB' 'Inactive(file):  3561244 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'FilePages:       4574552 kB' 'Mapped:            68092 kB' 'AnonPages:        161316 kB' 'Shmem:              2596 kB' 'KernelStack:        4412 kB' 'PageTables:         3772 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     194368 kB' 'Slab:             258660 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64292 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:        0 kB' 'FilePmdMapped:        0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.391    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.391    23:37:18	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.392    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.392    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.392    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.392    23:37:18	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.392    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.392    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.392    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.392    23:37:18	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.392    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.392    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.392    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.392    23:37:18	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.392    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.392    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.392    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.392    23:37:18	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.392    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.392    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.392    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.392    23:37:18	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.392    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.392    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.392    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.392    23:37:18	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.392    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.392    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.392    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.392    23:37:18	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.392    23:37:18	-- setup/common.sh@32 -- # continue
00:03:48.392    23:37:18	-- setup/common.sh@31 -- # IFS=': '
00:03:48.392    23:37:18	-- setup/common.sh@31 -- # read -r var val _
00:03:48.392    23:37:18	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.392    23:37:18	-- setup/common.sh@33 -- # echo 0
00:03:48.392    23:37:18	-- setup/common.sh@33 -- # return 0
00:03:48.392   23:37:18	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:48.392   23:37:18	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:48.392   23:37:18	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:48.392   23:37:18	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:48.392   23:37:18	-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:48.392  node0=512 expecting 512
00:03:48.392   23:37:18	-- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
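[Annotation] The final steps (@126-@130) fold the per-node counts into sorted_t/sorted_s and compare. Using the counts as array indices de-duplicates them, and bash returns indexed-array keys in ascending order, which appears to be the point of the "sorted_*" names; a sketch under that assumption:

declare -a sorted_t sorted_s nodes_test nodes_sys
nodes_test[0]=512; nodes_sys[0]=512            # state reached in this run
for node in "${!nodes_test[@]}"; do
  sorted_t[nodes_test[node]]=1                 # index = expected count
  sorted_s[nodes_sys[node]]=1                  # index = actual count
  echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
[[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]]     # "512" == "512" here (@130)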
00:03:48.392  
00:03:48.392  real	0m0.848s
00:03:48.392  user	0m0.345s
00:03:48.392  sys	0m0.543s
00:03:48.392   23:37:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:48.392   23:37:18	-- common/autotest_common.sh@10 -- # set +x
00:03:48.392  ************************************
00:03:48.392  END TEST per_node_1G_alloc
00:03:48.392  ************************************
00:03:48.392   23:37:18	-- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:48.392   23:37:18	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:48.392   23:37:18	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:48.392   23:37:18	-- common/autotest_common.sh@10 -- # set +x
00:03:48.392  ************************************
00:03:48.392  START TEST even_2G_alloc
00:03:48.392  ************************************
00:03:48.392   23:37:18	-- common/autotest_common.sh@1114 -- # even_2G_alloc
00:03:48.392   23:37:18	-- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:48.392   23:37:18	-- setup/hugepages.sh@49 -- # local size=2097152
00:03:48.392   23:37:18	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:48.392   23:37:18	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:48.392   23:37:18	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:48.392   23:37:18	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:48.392   23:37:18	-- setup/hugepages.sh@62 -- # user_nodes=()
00:03:48.392   23:37:18	-- setup/hugepages.sh@62 -- # local user_nodes
00:03:48.392   23:37:18	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:48.392   23:37:18	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:48.392   23:37:18	-- setup/hugepages.sh@67 -- # nodes_test=()
00:03:48.392   23:37:18	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:48.392   23:37:18	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:48.392   23:37:18	-- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:48.392   23:37:18	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:48.392   23:37:18	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:03:48.392   23:37:18	-- setup/hugepages.sh@83 -- # : 0
00:03:48.392   23:37:18	-- setup/hugepages.sh@84 -- # : 0
00:03:48.392   23:37:18	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
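[Annotation] get_test_nr_hugepages (@49-@57) converts the requested size into a page count. From 2097152 / 2048 = 1024 the units are evidently kB against the default 2048 kB hugepage size; the worked arithmetic (exact division as done here is inferred, not shown in the trace):

size=2097152                                  # 2 GiB expressed in kB
default_hugepages=2048                        # Hugepagesize from /proc/meminfo, kB
(( size >= default_hugepages ))               # the @55 guard passes
nr_hugepages=$(( size / default_hugepages ))  # 1024, matching @57
echo "nr_hugepages=$nr_hugepages"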
00:03:48.392   23:37:18	-- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:48.392   23:37:18	-- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:48.392   23:37:18	-- setup/hugepages.sh@153 -- # setup output
00:03:48.392   23:37:18	-- setup/common.sh@9 -- # [[ output == output ]]
00:03:48.392   23:37:18	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:48.651  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:03:48.651  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
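[Annotation] The allocation itself is delegated to scripts/setup.sh with NRHUGE and HUGE_EVEN_ALLOC set (@153/common.sh@10), which spreads the 1024 pages evenly across NUMA nodes (a single node on this VM). Reconstructed invocation:

NRHUGE=1024 HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh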
00:03:49.590   23:37:20	-- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:49.590   23:37:20	-- setup/hugepages.sh@89 -- # local node
00:03:49.590   23:37:20	-- setup/hugepages.sh@90 -- # local sorted_t
00:03:49.590   23:37:20	-- setup/hugepages.sh@91 -- # local sorted_s
00:03:49.590   23:37:20	-- setup/hugepages.sh@92 -- # local surp
00:03:49.590   23:37:20	-- setup/hugepages.sh@93 -- # local resv
00:03:49.590   23:37:20	-- setup/hugepages.sh@94 -- # local anon
00:03:49.590   23:37:20	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
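[Annotation] The @96 guard inspects the kernel's transparent-hugepage mode; "always [madvise] never" means madvise is selected, so THP is not globally off and AnonHugePages is worth measuring. A sketch, assuming the standard sysfs path (the trace only shows the mode string):

thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
  anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
else
  anon=0
fi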
00:03:49.590    23:37:20	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:49.590    23:37:20	-- setup/common.sh@17 -- # local get=AnonHugePages
00:03:49.590    23:37:20	-- setup/common.sh@18 -- # local node=
00:03:49.590    23:37:20	-- setup/common.sh@19 -- # local var val
00:03:49.590    23:37:20	-- setup/common.sh@20 -- # local mem_f mem
00:03:49.590    23:37:20	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.590    23:37:20	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.590    23:37:20	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.590    23:37:20	-- setup/common.sh@28 -- # mapfile -t mem
00:03:49.590    23:37:20	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.590    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.590    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591     23:37:20	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5047176 kB' 'MemAvailable:    9483132 kB' 'Buffers:           35200 kB' 'Cached:          4539352 kB' 'SwapCached:            0 kB' 'Active:          1002768 kB' 'Inactive:        3704296 kB' 'Active(anon):       1096 kB' 'Inactive(anon):   143076 kB' 'Active(file):    1001672 kB' 'Inactive(file):  3561220 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'AnonPages:        161712 kB' 'Mapped:            68080 kB' 'Shmem:              2596 kB' 'KReclaimable:     194368 kB' 'Slab:             258668 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64300 kB' 'KernelStack:        4428 kB' 'PageTables:         3492 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072912 kB' 'Committed_AS:     508904 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19508 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.591    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.591    23:37:20	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.591    23:37:20	-- setup/common.sh@33 -- # echo 0
00:03:49.591    23:37:20	-- setup/common.sh@33 -- # return 0
00:03:49.591   23:37:20	-- setup/hugepages.sh@97 -- # anon=0
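The pass above is one complete get_meminfo call as traced from setup/common.sh: the function snapshots /proc/meminfo, then walks the snapshot with an IFS=': ' read loop, continuing past every key until the requested one matches (AnonHugePages here; the backslash-escaped right-hand sides in the [[ ... ]] tests are just how xtrace renders a quoted literal pattern), at which point it echoes the bare value and returns. hugepages.sh@97 captures that as anon=0. A minimal sketch of the same lookup, reconstructed from the trace rather than copied from the repository:

    # Reconstructed from the xtrace above; the real setup/common.sh also
    # handles per-node meminfo files, which appear further down in this log.
    get_meminfo_sketch() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo        # one snapshot, parsed offline
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # skip every non-matching key
            echo "$val"                       # bare value, 'kB' unit dropped
            return 0
        done
    }
    get_meminfo_sketch AnonHugePages          # prints 0 on this host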
00:03:49.591    23:37:20	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:49.591    23:37:20	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:49.591    23:37:20	-- setup/common.sh@18 -- # local node=
00:03:49.591    23:37:20	-- setup/common.sh@19 -- # local var val
00:03:49.591    23:37:20	-- setup/common.sh@20 -- # local mem_f mem
00:03:49.592    23:37:20	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.592    23:37:20	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.592    23:37:20	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.592    23:37:20	-- setup/common.sh@28 -- # mapfile -t mem
00:03:49.592    23:37:20	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592     23:37:20	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5047428 kB' 'MemAvailable:    9483384 kB' 'Buffers:           35200 kB' 'Cached:          4539352 kB' 'SwapCached:            0 kB' 'Active:          1002776 kB' 'Inactive:        3704068 kB' 'Active(anon):       1104 kB' 'Inactive(anon):   142848 kB' 'Active(file):    1001672 kB' 'Inactive(file):  3561220 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'AnonPages:        161248 kB' 'Mapped:            68040 kB' 'Shmem:              2596 kB' 'KReclaimable:     194368 kB' 'Slab:             258668 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64300 kB' 'KernelStack:        4380 kB' 'PageTables:         3376 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072912 kB' 'Committed_AS:     508904 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19508 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.592    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.592    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.593    23:37:20	-- setup/common.sh@33 -- # echo 0
00:03:49.593    23:37:20	-- setup/common.sh@33 -- # return 0
00:03:49.593   23:37:20	-- setup/hugepages.sh@99 -- # surp=0
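HugePages_Surp counts surplus pages, meaning pages allocated beyond nr_hugepages when the pool is allowed to overcommit; the ceiling for that is the vm.nr_overcommit_hugepages sysctl. A statically provisioned 1024-page pool like this one is expected to report 0, which is what the scan above found. A quick way to confirm both sides, shown here for clarity rather than taken from the harness:

    # Surplus pages exist only past nr_hugepages; the sysctl caps them.
    grep '^HugePages_Surp' /proc/meminfo
    cat /proc/sys/vm/nr_overcommit_hugepages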
00:03:49.593    23:37:20	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:49.593    23:37:20	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:49.593    23:37:20	-- setup/common.sh@18 -- # local node=
00:03:49.593    23:37:20	-- setup/common.sh@19 -- # local var val
00:03:49.593    23:37:20	-- setup/common.sh@20 -- # local mem_f mem
00:03:49.593    23:37:20	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.593    23:37:20	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.593    23:37:20	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.593    23:37:20	-- setup/common.sh@28 -- # mapfile -t mem
00:03:49.593    23:37:20	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593     23:37:20	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5047932 kB' 'MemAvailable:    9483888 kB' 'Buffers:           35200 kB' 'Cached:          4539352 kB' 'SwapCached:            0 kB' 'Active:          1002768 kB' 'Inactive:        3704180 kB' 'Active(anon):       1096 kB' 'Inactive(anon):   142960 kB' 'Active(file):    1001672 kB' 'Inactive(file):  3561220 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'AnonPages:        161588 kB' 'Mapped:            68040 kB' 'Shmem:              2596 kB' 'KReclaimable:     194368 kB' 'Slab:             258668 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64300 kB' 'KernelStack:        4328 kB' 'PageTables:         3404 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072912 kB' 'Committed_AS:     508904 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19508 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.593    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.593    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.594    23:37:20	-- setup/common.sh@33 -- # echo 0
00:03:49.594    23:37:20	-- setup/common.sh@33 -- # return 0
00:03:49.594   23:37:20	-- setup/hugepages.sh@100 -- # resv=0
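HugePages_Rsvd is the count of pages a mapping has been promised but has not yet faulted in, for example right after an mmap with MAP_HUGETLB and before first touch; 0 here means no outstanding reservations. All four pool counters that hugepages.sh cross-checks can be read in one pass:

    # Total/Free describe the pool, Rsvd pages are promised but untouched,
    # Surp pages are overcommitted beyond nr_hugepages.
    grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo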
00:03:49.594  nr_hugepages=1024
00:03:49.594   23:37:20	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:49.594  resv_hugepages=0
00:03:49.594   23:37:20	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:49.594  surplus_hugepages=0
00:03:49.594   23:37:20	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:49.594  anon_hugepages=0
00:03:49.594   23:37:20	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:49.594   23:37:20	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:49.594   23:37:20	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
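With anon, surp and resv all established as 0, hugepages.sh@102-105 publishes the four values (the lines without the 23:37:20 xtrace prefix, nr_hugepages=1024 and so on, are the script's stdout interleaved with the trace) and @107/@109 assert the pool invariant. The arithmetic being checked, as a standalone sketch:

    # The invariant: the kernel's live pool must equal the requested size
    # plus surplus; with surp=resv=0 both assertions reduce to 1024 == 1024.
    nr_hugepages=1024 surp=0 resv=0
    (( 1024 == nr_hugepages + surp + resv )) && echo pool-consistent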
00:03:49.594    23:37:20	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:49.594    23:37:20	-- setup/common.sh@17 -- # local get=HugePages_Total
00:03:49.594    23:37:20	-- setup/common.sh@18 -- # local node=
00:03:49.594    23:37:20	-- setup/common.sh@19 -- # local var val
00:03:49.594    23:37:20	-- setup/common.sh@20 -- # local mem_f mem
00:03:49.594    23:37:20	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.594    23:37:20	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.594    23:37:20	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.594    23:37:20	-- setup/common.sh@28 -- # mapfile -t mem
00:03:49.594    23:37:20	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594     23:37:20	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5047432 kB' 'MemAvailable:    9483388 kB' 'Buffers:           35200 kB' 'Cached:          4539352 kB' 'SwapCached:            0 kB' 'Active:          1002768 kB' 'Inactive:        3704016 kB' 'Active(anon):       1096 kB' 'Inactive(anon):   142796 kB' 'Active(file):    1001672 kB' 'Inactive(file):  3561220 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'AnonPages:        161456 kB' 'Mapped:            68040 kB' 'Shmem:              2596 kB' 'KReclaimable:     194368 kB' 'Slab:             258668 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64300 kB' 'KernelStack:        4392 kB' 'PageTables:         3612 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072912 kB' 'Committed_AS:     508904 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19524 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.594    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.594    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.595    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.595    23:37:20	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.595    23:37:20	-- setup/common.sh@33 -- # echo 1024
00:03:49.595    23:37:20	-- setup/common.sh@33 -- # return 0
00:03:49.595   23:37:20	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
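@110 then re-reads HugePages_Total from a fresh /proc/meminfo snapshot and re-asserts the same identity against the live counter, presumably to catch the pool being resized between snapshots rather than trusting the earlier value. A hypothetical sketch of the same re-check pattern, not taken from the harness:

    # Re-read the live counter and compare against the expected pool size.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == 1024 )) || echo "hugepage pool changed underneath us" >&2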
00:03:49.595   23:37:20	-- setup/hugepages.sh@112 -- # get_nodes
00:03:49.595   23:37:20	-- setup/hugepages.sh@27 -- # local node
00:03:49.595   23:37:20	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:49.595   23:37:20	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:49.595   23:37:20	-- setup/hugepages.sh@32 -- # no_nodes=1
00:03:49.595   23:37:20	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:49.595   23:37:20	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:49.595   23:37:20	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
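get_nodes enumerates NUMA nodes by globbing /sys/devices/system/node/node+([0-9]), an extglob pattern, so shopt -s extglob is evidently enabled somewhere in the sourced setup; it records the expected page count per node in nodes_sys and finds a single node on this VM (no_nodes=1). The @115/@116 loop then folds the reserved count into each node's expected total (the harness evidently mirrors nodes_sys into nodes_test before this point) ahead of the per-node verification that follows. A sketch of the enumeration, with names kept from the trace:

    # extglob assumed enabled, as the node+([0-9]) pattern in the trace implies
    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1024    # expected hugepages per node
    done
    echo "nodes: ${!nodes_sys[*]}"        # prints 0 on this single-node VM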
00:03:49.595    23:37:20	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:49.595    23:37:20	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:49.595    23:37:20	-- setup/common.sh@18 -- # local node=0
00:03:49.595    23:37:20	-- setup/common.sh@19 -- # local var val
00:03:49.595    23:37:20	-- setup/common.sh@20 -- # local mem_f mem
00:03:49.595    23:37:20	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.595    23:37:20	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:49.595    23:37:20	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:49.595    23:37:20	-- setup/common.sh@28 -- # mapfile -t mem
00:03:49.596    23:37:20	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
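With node=0 the @23/@24 test now succeeds and mem_f switches to /sys/devices/system/node/node0/meminfo. Every line in that file carries a "Node 0 " prefix that /proc/meminfo lacks, which is exactly what the @29 expansion strips (it is a no-op on the global file), letting the same read loop parse both; the per-node dump below also has a different field set, with MemUsed and FilePages present and the Vmalloc and swap totals absent.

    # The @29 strip, applied per array element: the shortest prefix matching
    # the extglob pattern is removed, e.g.
    #   "Node 0 MemTotal: 12242980 kB" -> "MemTotal: 12242980 kB"
    mem=("${mem[@]#Node +([0-9]) }")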
00:03:49.596    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.596    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.596     23:37:20	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5047432 kB' 'MemUsed:         7195548 kB' 'SwapCached:            0 kB' 'Active:          1002768 kB' 'Inactive:        3704016 kB' 'Active(anon):       1096 kB' 'Inactive(anon):   142796 kB' 'Active(file):    1001672 kB' 'Inactive(file):  3561220 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'FilePages:       4574552 kB' 'Mapped:            68040 kB' 'AnonPages:        161456 kB' 'Shmem:              2596 kB' 'KernelStack:        4460 kB' 'PageTables:         3872 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     194368 kB' 'Slab:             258668 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64300 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:        0 kB' 'FilePmdMapped:        0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:03:49.596    23:37:20	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.596    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.596    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.596    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.596    23:37:20	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.596    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.596    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.596    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.596    23:37:20	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.596    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.596    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.596    23:37:20	-- setup/common.sh@31 -- # read -r var val _
00:03:49.596    23:37:20	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.596    23:37:20	-- setup/common.sh@32 -- # continue
00:03:49.596    23:37:20	-- setup/common.sh@31 -- # IFS=': '
00:03:49.596  [xtrace elided: setup/common.sh@31-32 read each remaining per-node meminfo field (Active through HugePages_Free) and hit 'continue' until HugePages_Surp matched below]
00:03:49.596    23:37:20	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.596    23:37:20	-- setup/common.sh@33 -- # echo 0
00:03:49.596    23:37:20	-- setup/common.sh@33 -- # return 0
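The query that just returned is the pattern repeated throughout this trace: get_meminfo picks the global or per-node meminfo file, strips any 'Node N ' prefix, and reads 'field: value' pairs until the requested field matches. A minimal runnable sketch of that mechanism, reconstructed from the xtrace of setup/common.sh@16-@33 (structure inferred from the trace, not copied from the source):

  #!/usr/bin/env bash
  shopt -s extglob                      # for the +([0-9]) pattern below
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node stats live under /sys; fall back to the global file otherwise.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with 'Node N '
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # the 'continue' lines filling this trace
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  get_meminfo HugePages_Surp            # prints the surplus page count, e.g. 0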
00:03:49.596   23:37:20	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:49.596   23:37:20	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:49.596   23:37:20	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:49.596   23:37:20	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:49.596  node0=1024 expecting 1024
00:03:49.596   23:37:20	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:49.596   23:37:20	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:49.596  
00:03:49.596  real	0m1.162s
00:03:49.596  user	0m0.276s
00:03:49.596  sys	0m0.924s
00:03:49.596   23:37:20	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:49.596   23:37:20	-- common/autotest_common.sh@10 -- # set +x
00:03:49.596  ************************************
00:03:49.596  END TEST even_2G_alloc
00:03:49.596  ************************************
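The banners and real/user/sys timing above are produced by the run_test wrapper invoked next. A hypothetical stand-in that mimics only the visible behavior (the real helper in autotest_common.sh also validates its arguments and toggles xtrace):

  run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"                     # emits the real/user/sys lines seen above
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
  }
  run_test odd_alloc odd_alloc      # mirrors the hugepages.sh@213 call below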
00:03:49.596   23:37:20	-- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:49.596   23:37:20	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:49.597   23:37:20	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:49.597   23:37:20	-- common/autotest_common.sh@10 -- # set +x
00:03:49.597  ************************************
00:03:49.597  START TEST odd_alloc
00:03:49.597  ************************************
00:03:49.597   23:37:20	-- common/autotest_common.sh@1114 -- # odd_alloc
00:03:49.597   23:37:20	-- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:49.597   23:37:20	-- setup/hugepages.sh@49 -- # local size=2098176
00:03:49.597   23:37:20	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:49.597   23:37:20	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:49.597   23:37:20	-- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:49.597   23:37:20	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:49.597   23:37:20	-- setup/hugepages.sh@62 -- # user_nodes=()
00:03:49.597   23:37:20	-- setup/hugepages.sh@62 -- # local user_nodes
00:03:49.597   23:37:20	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:49.597   23:37:20	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:49.597   23:37:20	-- setup/hugepages.sh@67 -- # nodes_test=()
00:03:49.597   23:37:20	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:49.597   23:37:20	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:49.597   23:37:20	-- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:49.597   23:37:20	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:49.597   23:37:20	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:03:49.597   23:37:20	-- setup/hugepages.sh@83 -- # : 0
00:03:49.597   23:37:20	-- setup/hugepages.sh@84 -- # : 0
00:03:49.597   23:37:20	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
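The arithmetic behind nr_hugepages=1025: odd_alloc requests 2098176 kB (2049 MB, via the HUGEMEM=2049 set just below) against the 2048 kB default hugepage size, and the count appears to be rounded up to a whole page, giving an odd total by construction. A hedged re-derivation (variable names are illustrative, not from hugepages.sh):

  size_kb=2098176                  # requested size from the trace (2049 MB)
  hugepagesize_kb=2048             # 'Hugepagesize: 2048 kB' in the snapshots below
  nr=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))   # ceiling division
  echo "nr_hugepages=$nr"          # -> 1025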
00:03:49.597   23:37:20	-- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:49.597   23:37:20	-- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:49.597   23:37:20	-- setup/hugepages.sh@160 -- # setup output
00:03:49.597   23:37:20	-- setup/common.sh@9 -- # [[ output == output ]]
00:03:49.597   23:37:20	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:49.856  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:03:49.856  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:50.795   23:37:21	-- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:50.795   23:37:21	-- setup/hugepages.sh@89 -- # local node
00:03:50.795   23:37:21	-- setup/hugepages.sh@90 -- # local sorted_t
00:03:50.795   23:37:21	-- setup/hugepages.sh@91 -- # local sorted_s
00:03:50.795   23:37:21	-- setup/hugepages.sh@92 -- # local surp
00:03:50.795   23:37:21	-- setup/hugepages.sh@93 -- # local resv
00:03:50.795   23:37:21	-- setup/hugepages.sh@94 -- # local anon
00:03:50.795   23:37:21	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:50.795    23:37:21	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:50.795    23:37:21	-- setup/common.sh@17 -- # local get=AnonHugePages
00:03:50.795    23:37:21	-- setup/common.sh@18 -- # local node=
00:03:50.795    23:37:21	-- setup/common.sh@19 -- # local var val
00:03:50.795    23:37:21	-- setup/common.sh@20 -- # local mem_f mem
00:03:50.795    23:37:21	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.795    23:37:21	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.795    23:37:21	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.795    23:37:21	-- setup/common.sh@28 -- # mapfile -t mem
00:03:50.795    23:37:21	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.795    23:37:21	-- setup/common.sh@31 -- # IFS=': '
00:03:50.796     23:37:21	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5047640 kB' 'MemAvailable:    9483600 kB' 'Buffers:           35200 kB' 'Cached:          4539360 kB' 'SwapCached:            0 kB' 'Active:          1002764 kB' 'Inactive:        3700044 kB' 'Active(anon):       1092 kB' 'Inactive(anon):   138820 kB' 'Active(file):    1001672 kB' 'Inactive(file):  3561224 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'AnonPages:        157764 kB' 'Mapped:            67220 kB' 'Shmem:              2596 kB' 'KReclaimable:     194368 kB' 'Slab:             258540 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64172 kB' 'KernelStack:        4272 kB' 'PageTables:         3300 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5071888 kB' 'Committed_AS:     498544 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19428 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:50.796    23:37:21	-- setup/common.sh@31 -- # read -r var val _
00:03:50.796  [xtrace elided: setup/common.sh@31-32 stepped through every field of the snapshot above (MemTotal through HardwareCorrupted), 'continue'-ing until AnonHugePages matched below]
00:03:50.796    23:37:21	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:50.796    23:37:21	-- setup/common.sh@33 -- # echo 0
00:03:50.796    23:37:21	-- setup/common.sh@33 -- # return 0
00:03:50.796   23:37:21	-- setup/hugepages.sh@97 -- # anon=0
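The anon=0 just derived took two steps: hugepages.sh@96 first confirmed transparent hugepages are not hard-disabled by matching /sys/kernel/mm/transparent_hugepage/enabled against *[never]* (here it reads 'always [madvise] never', so madvise is the active mode), and only then queried AnonHugePages, which the snapshot reports as 0 kB. A minimal sketch of that gate, assuming the standard sysfs path:

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. 'always [madvise] never'
  if [[ $thp != *'[never]'* ]]; then
      # THP not hard-disabled: anonymous hugepages may exist, so count them.
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  else
      anon=0
  fi
  echo "anon=$anon"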
00:03:50.796    23:37:21	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:50.796    23:37:21	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:50.796    23:37:21	-- setup/common.sh@18 -- # local node=
00:03:50.796    23:37:21	-- setup/common.sh@19 -- # local var val
00:03:50.796    23:37:21	-- setup/common.sh@20 -- # local mem_f mem
00:03:50.796    23:37:21	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.796    23:37:21	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.796    23:37:21	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.796    23:37:21	-- setup/common.sh@28 -- # mapfile -t mem
00:03:50.796    23:37:21	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.796    23:37:21	-- setup/common.sh@31 -- # IFS=': '
00:03:50.796    23:37:21	-- setup/common.sh@31 -- # read -r var val _
00:03:50.797     23:37:21	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5047640 kB' 'MemAvailable:    9483600 kB' 'Buffers:           35200 kB' 'Cached:          4539360 kB' 'SwapCached:            0 kB' 'Active:          1002764 kB' 'Inactive:        3700344 kB' 'Active(anon):       1092 kB' 'Inactive(anon):   139120 kB' 'Active(file):    1001672 kB' 'Inactive(file):  3561224 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'AnonPages:        157864 kB' 'Mapped:            67220 kB' 'Shmem:              2596 kB' 'KReclaimable:     194368 kB' 'Slab:             258540 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64172 kB' 'KernelStack:        4304 kB' 'PageTables:         3404 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5071888 kB' 'Committed_AS:     498544 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19444 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:50.797  [xtrace elided: the same field-by-field scan over the snapshot above (MemTotal through HugePages_Rsvd) until HugePages_Surp matched below]
00:03:50.798    23:37:21	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.798    23:37:21	-- setup/common.sh@33 -- # echo 0
00:03:50.798    23:37:21	-- setup/common.sh@33 -- # return 0
00:03:50.798   23:37:21	-- setup/hugepages.sh@99 -- # surp=0
00:03:50.798    23:37:21	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:50.798    23:37:21	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:50.798    23:37:21	-- setup/common.sh@18 -- # local node=
00:03:50.798    23:37:21	-- setup/common.sh@19 -- # local var val
00:03:50.798    23:37:21	-- setup/common.sh@20 -- # local mem_f mem
00:03:50.798    23:37:21	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.798    23:37:21	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.798    23:37:21	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.798    23:37:21	-- setup/common.sh@28 -- # mapfile -t mem
00:03:50.798    23:37:21	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.798    23:37:21	-- setup/common.sh@31 -- # IFS=': '
00:03:50.798     23:37:21	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5047624 kB' 'MemAvailable:    9483584 kB' 'Buffers:           35200 kB' 'Cached:          4539360 kB' 'SwapCached:            0 kB' 'Active:          1002756 kB' 'Inactive:        3699980 kB' 'Active(anon):       1084 kB' 'Inactive(anon):   138756 kB' 'Active(file):    1001672 kB' 'Inactive(file):  3561224 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'AnonPages:        157496 kB' 'Mapped:            67220 kB' 'Shmem:              2596 kB' 'KReclaimable:     194368 kB' 'Slab:             258612 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64244 kB' 'KernelStack:        4308 kB' 'PageTables:         3576 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5071888 kB' 'Committed_AS:     498544 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19444 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:50.798    23:37:21	-- setup/common.sh@31 -- # read -r var val _
00:03:50.798  [xtrace elided: field-by-field scan over the snapshot above (MemTotal through HugePages_Free) until HugePages_Rsvd matched below]
00:03:50.799    23:37:21	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.799    23:37:21	-- setup/common.sh@33 -- # echo 0
00:03:50.799    23:37:21	-- setup/common.sh@33 -- # return 0
00:03:50.799   23:37:21	-- setup/hugepages.sh@100 -- # resv=0
00:03:50.799   23:37:21	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:50.799  nr_hugepages=1025
00:03:50.799   23:37:21	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:50.799  resv_hugepages=0
00:03:50.799   23:37:21	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:50.799  surplus_hugepages=0
00:03:50.799   23:37:21	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:50.799  anon_hugepages=0
00:03:50.799   23:37:21	-- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:50.799   23:37:21	-- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
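With surp=0 and resv=0 established, verify_nr_hugepages checks the kernel's pool against the request: hugepages.sh@107 asserts HugePages_Total == nr_hugepages + surplus + reserved, and @109 that the total equals the request outright. A self-contained spot-check of the same identity, assuming the standard /proc/meminfo fields:

  nr=1025                                                      # requested by odd_alloc
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  (( total == nr + surp + resv )) && echo 'hugepage accounting consistent'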
00:03:50.799    23:37:21	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:50.799    23:37:21	-- setup/common.sh@17 -- # local get=HugePages_Total
00:03:50.799    23:37:21	-- setup/common.sh@18 -- # local node=
00:03:50.799    23:37:21	-- setup/common.sh@19 -- # local var val
00:03:50.799    23:37:21	-- setup/common.sh@20 -- # local mem_f mem
00:03:50.799    23:37:21	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.799    23:37:21	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.799    23:37:21	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.799    23:37:21	-- setup/common.sh@28 -- # mapfile -t mem
00:03:50.799    23:37:21	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.799    23:37:21	-- setup/common.sh@31 -- # IFS=': '
00:03:50.799     23:37:21	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5047624 kB' 'MemAvailable:    9483584 kB' 'Buffers:           35200 kB' 'Cached:          4539360 kB' 'SwapCached:            0 kB' 'Active:          1002764 kB' 'Inactive:        3700720 kB' 'Active(anon):       1092 kB' 'Inactive(anon):   139496 kB' 'Active(file):    1001672 kB' 'Inactive(file):  3561224 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'AnonPages:        158040 kB' 'Mapped:            67336 kB' 'Shmem:              2596 kB' 'KReclaimable:     194368 kB' 'Slab:             258612 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64244 kB' 'KernelStack:        4352 kB' 'PageTables:         3496 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5071888 kB' 'Committed_AS:     498544 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19412 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:50.799    23:37:21	-- setup/common.sh@31 -- # read -r var val _
00:03:50.799    23:37:21	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:50.799    23:37:21	-- setup/common.sh@32 -- # continue
00:03:50.800    23:37:21	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:50.800    23:37:21	-- setup/common.sh@33 -- # echo 1025
00:03:50.800    23:37:21	-- setup/common.sh@33 -- # return 0
00:03:50.800   23:37:21	-- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:50.800   23:37:21	-- setup/hugepages.sh@112 -- # get_nodes
00:03:50.800   23:37:21	-- setup/hugepages.sh@27 -- # local node
00:03:50.800   23:37:21	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:50.800   23:37:21	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:03:50.800   23:37:21	-- setup/hugepages.sh@32 -- # no_nodes=1
00:03:50.800   23:37:21	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
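get_nodes (hugepages.sh@27-33) enumerates the NUMA nodes from sysfs and records each node's current 2 MiB hugepage count. A hedged reconstruction; the nr_hugepages sysfs path is an assumption, since the trace only shows the already-expanded value 1025:

  shopt -s extglob
  get_nodes() {
      local node
      nodes_sys=()
      for node in /sys/devices/system/node/node+([0-9]); do
          # assumed source of the traced 1025; the log records only the result
          nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
      done
      no_nodes=${#nodes_sys[@]}
      (( no_nodes > 0 ))   # this VM exposes a single node, node0
  }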
00:03:50.800   23:37:21	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:50.800   23:37:21	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:50.800    23:37:21	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:50.800    23:37:21	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:50.800    23:37:21	-- setup/common.sh@18 -- # local node=0
00:03:50.800    23:37:21	-- setup/common.sh@19 -- # local var val
00:03:50.800    23:37:21	-- setup/common.sh@20 -- # local mem_f mem
00:03:50.800    23:37:21	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.800    23:37:21	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:50.800    23:37:21	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:50.800    23:37:21	-- setup/common.sh@28 -- # mapfile -t mem
00:03:50.800    23:37:21	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.800    23:37:21	-- setup/common.sh@31 -- # IFS=': '
00:03:50.800     23:37:21	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5047628 kB' 'MemUsed:         7195352 kB' 'SwapCached:            0 kB' 'Active:          1002756 kB' 'Inactive:        3699644 kB' 'Active(anon):       1084 kB' 'Inactive(anon):   138416 kB' 'Active(file):    1001672 kB' 'Inactive(file):  3561228 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'Dirty:               656 kB' 'Writeback:             0 kB' 'FilePages:       4574560 kB' 'Mapped:            67228 kB' 'AnonPages:        157328 kB' 'Shmem:              2596 kB' 'KernelStack:        4308 kB' 'PageTables:         3432 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     194368 kB' 'Slab:             258612 kB' 'SReclaimable:     194368 kB' 'SUnreclaim:        64244 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:        0 kB' 'FilePmdMapped:        0 kB' 'HugePages_Total:  1025' 'HugePages_Free:   1025' 'HugePages_Surp:      0'
00:03:50.800    23:37:21	-- setup/common.sh@31 -- # read -r var val _
00:03:50.800    23:37:21	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.800    23:37:21	-- setup/common.sh@32 -- # continue
00:03:50.801    23:37:21	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.801    23:37:21	-- setup/common.sh@33 -- # echo 0
00:03:50.801    23:37:21	-- setup/common.sh@33 -- # return 0
00:03:50.801   23:37:21	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:50.801   23:37:21	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:50.801   23:37:21	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:50.801   23:37:21	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:50.801   23:37:21	-- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:03:50.801  node0=1025 expecting 1025
00:03:50.801   23:37:21	-- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
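The closing lines of the verification (hugepages.sh@126-130) use a small set-comparison idiom: expected and observed per-node counts become keys of two associative arrays, so matching distributions yield identical key lists regardless of node order. A sketch under that reading of the trace:

  # given nodes_test=([0]=1025) and nodes_sys=([0]=1025) from the steps above
  declare -A sorted_t sorted_s
  for node in "${!nodes_test[@]}"; do
      sorted_t[${nodes_test[node]}]=1   # observed count as a key
      sorted_s[${nodes_sys[node]}]=1    # expected count as a key
      echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]   # @130 here: "1025" == "1025"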
00:03:50.801  
00:03:50.801  real	0m1.203s
00:03:50.801  user	0m0.319s
00:03:50.801  sys	0m0.861s
00:03:50.801   23:37:21	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:50.801   23:37:21	-- common/autotest_common.sh@10 -- # set +x
00:03:50.801  ************************************
00:03:50.801  END TEST odd_alloc
00:03:50.801  ************************************
00:03:50.801   23:37:21	-- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:50.801   23:37:21	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:50.801   23:37:21	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:50.801   23:37:21	-- common/autotest_common.sh@10 -- # set +x
00:03:50.801  ************************************
00:03:50.801  START TEST custom_alloc
00:03:50.801  ************************************
00:03:50.801   23:37:21	-- common/autotest_common.sh@1114 -- # custom_alloc
00:03:50.801   23:37:21	-- setup/hugepages.sh@167 -- # local IFS=,
00:03:50.801   23:37:21	-- setup/hugepages.sh@169 -- # local node
00:03:50.801   23:37:21	-- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:50.801   23:37:21	-- setup/hugepages.sh@170 -- # local nodes_hp
00:03:50.801   23:37:21	-- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:50.801   23:37:21	-- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:50.801   23:37:21	-- setup/hugepages.sh@49 -- # local size=1048576
00:03:50.801   23:37:21	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:50.801   23:37:21	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:50.801   23:37:21	-- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:50.801   23:37:21	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:50.801   23:37:21	-- setup/hugepages.sh@62 -- # user_nodes=()
00:03:50.802   23:37:21	-- setup/hugepages.sh@62 -- # local user_nodes
00:03:50.802   23:37:21	-- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:50.802   23:37:21	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:50.802   23:37:21	-- setup/hugepages.sh@67 -- # nodes_test=()
00:03:50.802   23:37:21	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:50.802   23:37:21	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:50.802   23:37:21	-- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:50.802   23:37:21	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:50.802   23:37:21	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:50.802   23:37:21	-- setup/hugepages.sh@83 -- # : 0
00:03:50.802   23:37:21	-- setup/hugepages.sh@84 -- # : 0
00:03:50.802   23:37:21	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
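The page count traced at hugepages.sh@57 follows directly from the requested pool size: 1048576 kB divided by the 2048 kB Hugepagesize reported in the meminfo dumps gives 512 pages, and with a single node the whole pool lands on node 0:

  size_kb=1048576        # argument to get_test_nr_hugepages above
  hugepagesize_kb=2048   # Hugepagesize from the /proc/meminfo dumps
  nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 512, as traced at @57
  nodes_test[0]=$nr_hugepages   # @82: the only node takes the whole pool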
00:03:50.802   23:37:21	-- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:50.802   23:37:21	-- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:03:50.802   23:37:21	-- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:50.802   23:37:21	-- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:50.802   23:37:21	-- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:50.802   23:37:21	-- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:50.802   23:37:21	-- setup/hugepages.sh@62 -- # user_nodes=()
00:03:50.802   23:37:21	-- setup/hugepages.sh@62 -- # local user_nodes
00:03:50.802   23:37:21	-- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:50.802   23:37:21	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:50.802   23:37:21	-- setup/hugepages.sh@67 -- # nodes_test=()
00:03:50.802   23:37:21	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:50.802   23:37:21	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:50.802   23:37:21	-- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:50.802   23:37:21	-- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:50.802   23:37:21	-- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:50.802   23:37:21	-- setup/hugepages.sh@78 -- # return 0
00:03:50.802   23:37:21	-- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:03:50.802   23:37:21	-- setup/hugepages.sh@187 -- # setup output
00:03:50.802   23:37:21	-- setup/common.sh@9 -- # [[ output == output ]]
00:03:50.802   23:37:21	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:51.061  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:03:51.320  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
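Presumably setup.sh reads HUGENODE from the environment to size the per-node pools before rebinding devices; the exact contract is an assumption, since the trace only shows the variable being assembled at @181-187 and the script being invoked:

  # hypothetical invocation matching the trace above
  HUGENODE='nodes_hp[0]=512' /home/vagrant/spdk_repo/spdk/scripts/setup.sh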
00:03:51.581   23:37:22	-- setup/hugepages.sh@188 -- # nr_hugepages=512
00:03:51.581   23:37:22	-- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:51.581   23:37:22	-- setup/hugepages.sh@89 -- # local node
00:03:51.581   23:37:22	-- setup/hugepages.sh@90 -- # local sorted_t
00:03:51.581   23:37:22	-- setup/hugepages.sh@91 -- # local sorted_s
00:03:51.581   23:37:22	-- setup/hugepages.sh@92 -- # local surp
00:03:51.581   23:37:22	-- setup/hugepages.sh@93 -- # local resv
00:03:51.581   23:37:22	-- setup/hugepages.sh@94 -- # local anon
00:03:51.581   23:37:22	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:51.581    23:37:22	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:51.581    23:37:22	-- setup/common.sh@17 -- # local get=AnonHugePages
00:03:51.581    23:37:22	-- setup/common.sh@18 -- # local node=
00:03:51.581    23:37:22	-- setup/common.sh@19 -- # local var val
00:03:51.581    23:37:22	-- setup/common.sh@20 -- # local mem_f mem
00:03:51.581    23:37:22	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.581    23:37:22	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.581    23:37:22	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.581    23:37:22	-- setup/common.sh@28 -- # mapfile -t mem
00:03:51.581    23:37:22	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.581    23:37:22	-- setup/common.sh@31 -- # IFS=': '
00:03:51.581     23:37:22	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         6101644 kB' 'MemAvailable:   10537592 kB' 'Buffers:           35208 kB' 'Cached:          4539364 kB' 'SwapCached:            0 kB' 'Active:          1002828 kB' 'Inactive:        3699988 kB' 'Active(anon):       1092 kB' 'Inactive(anon):   138820 kB' 'Active(file):    1001736 kB' 'Inactive(file):  3561168 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               740 kB' 'Writeback:             0 kB' 'AnonPages:        157520 kB' 'Mapped:            67320 kB' 'Shmem:              2604 kB' 'KReclaimable:     194348 kB' 'Slab:             258512 kB' 'SReclaimable:     194348 kB' 'SUnreclaim:        64164 kB' 'KernelStack:        4264 kB' 'PageTables:         3276 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5597200 kB' 'Committed_AS:     498544 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19412 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:51.581    23:37:22	-- setup/common.sh@31 -- # read -r var val _
00:03:51.581    23:37:22	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:51.581    23:37:22	-- setup/common.sh@32 -- # continue
00:03:51.582    23:37:22	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:51.582    23:37:22	-- setup/common.sh@33 -- # echo 0
00:03:51.582    23:37:22	-- setup/common.sh@33 -- # return 0
00:03:51.582   23:37:22	-- setup/hugepages.sh@97 -- # anon=0
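The AnonHugePages pass just completed is gated by the transparent-hugepage check at hugepages.sh@96: only when THP is not set to "[never]" does verify_nr_hugepages sample AnonHugePages so it can be discounted later; here madvise mode is active and the counter reads 0. A sketch of that guard, with the sysfs path as an assumption:

  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" in this run
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # 0 in the dump above
  fi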
00:03:51.582    23:37:22	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:51.582    23:37:22	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:51.582    23:37:22	-- setup/common.sh@18 -- # local node=
00:03:51.582    23:37:22	-- setup/common.sh@19 -- # local var val
00:03:51.582    23:37:22	-- setup/common.sh@20 -- # local mem_f mem
00:03:51.582    23:37:22	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.582    23:37:22	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.582    23:37:22	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.582    23:37:22	-- setup/common.sh@28 -- # mapfile -t mem
00:03:51.582    23:37:22	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.582    23:37:22	-- setup/common.sh@31 -- # IFS=': '
00:03:51.582     23:37:22	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         6101140 kB' 'MemAvailable:   10537092 kB' 'Buffers:           35208 kB' 'Cached:          4539360 kB' 'SwapCached:            0 kB' 'Active:          1002812 kB' 'Inactive:        3699924 kB' 'Active(anon):       1076 kB' 'Inactive(anon):   138752 kB' 'Active(file):    1001736 kB' 'Inactive(file):  3561172 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               808 kB' 'Writeback:             0 kB' 'AnonPages:        157432 kB' 'Mapped:            67236 kB' 'Shmem:              2596 kB' 'KReclaimable:     194348 kB' 'Slab:             258488 kB' 'SReclaimable:     194348 kB' 'SUnreclaim:        64140 kB' 'KernelStack:        4304 kB' 'PageTables:         3356 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5597200 kB' 'Committed_AS:     498544 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19396 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:51.582    23:37:22	-- setup/common.sh@31 -- # read -r var val _
00:03:51.582    23:37:22	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.582    23:37:22	-- setup/common.sh@32 -- # continue
00:03:51.583    23:37:22	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.583    23:37:22	-- setup/common.sh@33 -- # echo 0
00:03:51.583    23:37:22	-- setup/common.sh@33 -- # return 0
00:03:51.583   23:37:22	-- setup/hugepages.sh@99 -- # surp=0
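What the trace above shows is setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time: split each line on ': ', compare the key against the requested field, echo the value on the first hit (common.sh@33) and fall through with continue otherwise. xtrace escapes every character of the literal comparison target, which is why it renders as \H\u\g\e\P\a\g\e\s\_\S\u\r\p. A minimal sketch of that loop follows; the helper name get_meminfo_sketch is invented here, and the body is reconstructed from the trace rather than copied from SPDK's source.

    # Sketch of the lookup traced at setup/common.sh@16-33. Assumed helper
    # name; logic reconstructed from the xtrace only.
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, prefer the per-node sysfs file (common.sh@23-24).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val _
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }       # strip 'Node N ' prefix (common.sh@29)
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then     # xtrace shows this as \H\u\g\e...
                echo "$val"                   # value only; the 'kB' unit lands in $_
                return 0
            fi
        done <"$mem_f"
        return 1
    }

    surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in this run, matching @99 above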
00:03:51.583    23:37:22	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:51.583    23:37:22	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:51.583    23:37:22	-- setup/common.sh@18 -- # local node=
00:03:51.583    23:37:22	-- setup/common.sh@19 -- # local var val
00:03:51.583    23:37:22	-- setup/common.sh@20 -- # local mem_f mem
00:03:51.583    23:37:22	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.583    23:37:22	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.583    23:37:22	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.583    23:37:22	-- setup/common.sh@28 -- # mapfile -t mem
00:03:51.583    23:37:22	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.583    23:37:22	-- setup/common.sh@31 -- # IFS=': '
00:03:51.583    23:37:22	-- setup/common.sh@31 -- # read -r var val _
00:03:51.583     23:37:22	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         6101392 kB' 'MemAvailable:   10537344 kB' 'Buffers:           35208 kB' 'Cached:          4539360 kB' 'SwapCached:            0 kB' 'Active:          1002812 kB' 'Inactive:        3700004 kB' 'Active(anon):       1076 kB' 'Inactive(anon):   138832 kB' 'Active(file):    1001736 kB' 'Inactive(file):  3561172 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               808 kB' 'Writeback:             0 kB' 'AnonPages:        157464 kB' 'Mapped:            67236 kB' 'Shmem:              2596 kB' 'KReclaimable:     194348 kB' 'Slab:             258464 kB' 'SReclaimable:     194348 kB' 'SUnreclaim:        64116 kB' 'KernelStack:        4320 kB' 'PageTables:         3400 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5597200 kB' 'Committed_AS:     498544 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19412 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:51.583    23:37:22	-- setup/common.sh@31-32 -- # [condensed xtrace: the read loop compares all 44 fields of the snapshot above, MemTotal through HugePages_Free, against HugePages_Rsvd, hitting continue on every non-match]
00:03:51.584    23:37:22	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:51.584    23:37:22	-- setup/common.sh@33 -- # echo 0
00:03:51.845    23:37:22	-- setup/common.sh@33 -- # return 0
00:03:51.845   23:37:22	-- setup/hugepages.sh@100 -- # resv=0
00:03:51.845   23:37:22	-- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:03:51.845  nr_hugepages=512
00:03:51.845   23:37:22	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:51.845  resv_hugepages=0
00:03:51.845   23:37:22	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:51.845  surplus_hugepages=0
00:03:51.845   23:37:22	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:51.845  anon_hugepages=0
00:03:51.845   23:37:22	-- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:51.845   23:37:22	-- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
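The two arithmetic guards at hugepages.sh@107 and @109 above encode the hugetlb accounting identity: the kernel's reported pool must equal the requested page count plus surplus plus reserved pages, and the scan at @110 re-checks that against the live HugePages_Total. A simplified restatement using the sketch helper defined earlier (variable names mirror the trace, the surrounding structure is assumed):

    # Pool accounting mirrored from hugepages.sh@107-110 (simplified sketch).
    nr_hugepages=512 surp=0 resv=0
    total=$(get_meminfo_sketch HugePages_Total)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool verified: $total"
    else
        echo "pool mismatch: kernel=$total expected=$((nr_hugepages + surp + resv))" >&2
    fi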
00:03:51.845    23:37:22	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:51.845    23:37:22	-- setup/common.sh@17 -- # local get=HugePages_Total
00:03:51.845    23:37:22	-- setup/common.sh@18 -- # local node=
00:03:51.845    23:37:22	-- setup/common.sh@19 -- # local var val
00:03:51.845    23:37:22	-- setup/common.sh@20 -- # local mem_f mem
00:03:51.845    23:37:22	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.845    23:37:22	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.845    23:37:22	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.845    23:37:22	-- setup/common.sh@28 -- # mapfile -t mem
00:03:51.845    23:37:22	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.845    23:37:22	-- setup/common.sh@31 -- # IFS=': '
00:03:51.845    23:37:22	-- setup/common.sh@31 -- # read -r var val _
00:03:51.845     23:37:22	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         6101392 kB' 'MemAvailable:   10537344 kB' 'Buffers:           35208 kB' 'Cached:          4539360 kB' 'SwapCached:            0 kB' 'Active:          1002812 kB' 'Inactive:        3699988 kB' 'Active(anon):       1076 kB' 'Inactive(anon):   138816 kB' 'Active(file):    1001736 kB' 'Inactive(file):  3561172 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               808 kB' 'Writeback:             0 kB' 'AnonPages:        157464 kB' 'Mapped:            67236 kB' 'Shmem:              2596 kB' 'KReclaimable:     194348 kB' 'Slab:             258464 kB' 'SReclaimable:     194348 kB' 'SUnreclaim:        64116 kB' 'KernelStack:        4336 kB' 'PageTables:         3436 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5597200 kB' 'Committed_AS:     500444 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19396 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:51.845    23:37:22	-- setup/common.sh@31-32 -- # [condensed xtrace: the read loop compares 42 fields, MemTotal through FilePmdMapped, against HugePages_Total, hitting continue on every non-match]
00:03:51.846    23:37:22	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:51.846    23:37:22	-- setup/common.sh@33 -- # echo 512
00:03:51.846    23:37:22	-- setup/common.sh@33 -- # return 0
00:03:51.846   23:37:22	-- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:51.846   23:37:22	-- setup/hugepages.sh@112 -- # get_nodes
00:03:51.846   23:37:22	-- setup/hugepages.sh@27 -- # local node
00:03:51.846   23:37:22	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:51.846   23:37:22	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:51.846   23:37:22	-- setup/hugepages.sh@32 -- # no_nodes=1
00:03:51.846   23:37:22	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
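get_nodes, just traced at hugepages.sh@27-33, discovers the NUMA topology by globbing sysfs and records the kernel's per-node hugepage count in nodes_sys, keyed by node number; this run has a single node0 holding all 512 pages. A sketch under the same glob (reading the count via the earlier helper is an assumption; the trace only shows the already-expanded value 512):

    # get_nodes sketch (hugepages.sh@27-33): one array slot per NUMA node.
    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(get_meminfo_sketch HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || echo "no NUMA nodes found" >&2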
00:03:51.846   23:37:22	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:51.846   23:37:22	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:51.846    23:37:22	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:51.846    23:37:22	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:51.846    23:37:22	-- setup/common.sh@18 -- # local node=0
00:03:51.846    23:37:22	-- setup/common.sh@19 -- # local var val
00:03:51.846    23:37:22	-- setup/common.sh@20 -- # local mem_f mem
00:03:51.846    23:37:22	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.846    23:37:22	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:51.846    23:37:22	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:51.846    23:37:22	-- setup/common.sh@28 -- # mapfile -t mem
00:03:51.846    23:37:22	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.846    23:37:22	-- setup/common.sh@31 -- # IFS=': '
00:03:51.846     23:37:22	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         6101392 kB' 'MemUsed:         6141588 kB' 'SwapCached:            0 kB' 'Active:          1002812 kB' 'Inactive:        3699988 kB' 'Active(anon):       1076 kB' 'Inactive(anon):   138816 kB' 'Active(file):    1001736 kB' 'Inactive(file):  3561172 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'Dirty:               808 kB' 'Writeback:             0 kB' 'FilePages:       4574568 kB' 'Mapped:            67496 kB' 'AnonPages:        157204 kB' 'Shmem:              2596 kB' 'KernelStack:        4404 kB' 'PageTables:         3696 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     194348 kB' 'Slab:             258464 kB' 'SReclaimable:     194348 kB' 'SUnreclaim:        64116 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:        0 kB' 'FilePmdMapped:        0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
00:03:51.846    23:37:22	-- setup/common.sh@31 -- # read -r var val _
00:03:51.846    23:37:22	-- setup/common.sh@31-32 -- # [condensed xtrace: the read loop compares the 34 fields of the node0 snapshot above, MemTotal through HugePages_Free, against HugePages_Surp, hitting continue on every non-match]
00:03:51.847    23:37:22	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.847    23:37:22	-- setup/common.sh@33 -- # echo 0
00:03:51.847    23:37:22	-- setup/common.sh@33 -- # return 0
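The scan that just returned ran against /sys/devices/system/node/node0/meminfo rather than /proc/meminfo: with a node argument, common.sh@23-24 swaps the source file, and @29 strips the 'Node 0 ' prefix each sysfs line carries (hence the slightly different field list, MemUsed and FilePages instead of the swap and vmalloc counters). A standalone sketch of that branch, grounded in the traced lines:

    # Per-node source selection, as traced at common.sh@22-29 (sketch).
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    shopt -s extglob
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # 'Node 0 MemFree: ...' -> 'MemFree: ...'
    printf '%s\n' "${mem[@]:0:3}"      # first few normalized lines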
00:03:51.847   23:37:22	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:51.847   23:37:22	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:51.847   23:37:22	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:51.847   23:37:22	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:51.847   23:37:22	-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:51.847  node0=512 expecting 512
00:03:51.847   23:37:22	-- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
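The sorted_t/sorted_s writes at hugepages.sh@127 are a set-comparison idiom: each observed count is used as an array index, which deduplicates values, so "every node saw the expected count" reduces to comparing the two key sets. Here both collapse to the single key 512, hence the final [[ 512 == \5\1\2 ]]. A sketch of that tail (an assumed simplification, seeded with this run's single node):

    # Set-comparison idiom from hugepages.sh@126-130 (sketch).
    nodes_test=([0]=512) nodes_sys=([0]=512)   # single node in this run
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1           # value used as index = dedup
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "per-node layout verified"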
00:03:51.847  
00:03:51.847  real	0m0.937s
00:03:51.847  user	0m0.258s
00:03:51.847  sys	0m0.620s
00:03:51.847   23:37:22	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:51.847   23:37:22	-- common/autotest_common.sh@10 -- # set +x
00:03:51.847  ************************************
00:03:51.847  END TEST custom_alloc
00:03:51.847  ************************************
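run_test, invoked on the next line, is the wrapper producing the START/END banners and the real/user/sys block above. A hedged sketch of its shape (the real helper in autotest_common.sh does more bookkeeping, including the xtrace toggling visible at @1093 and @1115):

    # run_test sketch, inferred from the banners and timing in this log.
    run_test_sketch() {
        local name=$1; shift
        [ "$#" -le 0 ] && return 1     # cf. the '[' 2 -le 1 ']' guard at @1087
        local stars='************************************'
        printf '%s\nSTART TEST %s\n%s\n' "$stars" "$name" "$stars"
        time "$@"
        local rc=$?
        printf '%s\nEND TEST %s\n%s\n' "$stars" "$name" "$stars"
        return $rc
    }

    run_test_sketch no_shrink_alloc no_shrink_alloc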
00:03:51.847   23:37:22	-- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:51.847   23:37:22	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:51.847   23:37:22	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:51.847   23:37:22	-- common/autotest_common.sh@10 -- # set +x
00:03:51.847  ************************************
00:03:51.847  START TEST no_shrink_alloc
00:03:51.847  ************************************
00:03:51.847   23:37:22	-- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:03:51.847   23:37:22	-- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:51.847   23:37:22	-- setup/hugepages.sh@49 -- # local size=2097152
00:03:51.847   23:37:22	-- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:51.847   23:37:22	-- setup/hugepages.sh@51 -- # shift
00:03:51.847   23:37:22	-- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:51.847   23:37:22	-- setup/hugepages.sh@52 -- # local node_ids
00:03:51.847   23:37:22	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:51.847   23:37:22	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:51.847   23:37:22	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:51.847   23:37:22	-- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:51.847   23:37:22	-- setup/hugepages.sh@62 -- # local user_nodes
00:03:51.847   23:37:22	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:51.847   23:37:22	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:51.847   23:37:22	-- setup/hugepages.sh@67 -- # nodes_test=()
00:03:51.847   23:37:22	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:51.847   23:37:22	-- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:51.847   23:37:22	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:51.847   23:37:22	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:51.847   23:37:22	-- setup/hugepages.sh@73 -- # return 0
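get_test_nr_hugepages, traced above at hugepages.sh@49-73, turns a requested size into a page count and spreads it across the requested nodes: the 2097152 here, divided by the 2048 kB default hugepage size, yields the nr_hugepages=1024 seen at @57, consistent with 'Hugetlb: 2097152 kB' in the snapshot that follows. A sketch, assuming size is expressed in kB and that plain division is the actual formula; both are inferences from the traced numbers, not confirmed source:

    # get_test_nr_hugepages sketch (hugepages.sh@49-73); the kB unit and
    # the size/default division are assumptions.
    get_test_nr_hugepages_sketch() {
        local size=$1; shift
        local node_ids=("$@")
        local default_hugepages
        default_hugepages=$(get_meminfo_sketch Hugepagesize)   # 2048 (kB) here
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))           # 2097152/2048 = 1024
        nodes_test=()
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages
        done
    }

    get_test_nr_hugepages_sketch 2097152 0   # mirrors the @195 call above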
00:03:51.847   23:37:22	-- setup/hugepages.sh@198 -- # setup output
00:03:51.847   23:37:22	-- setup/common.sh@9 -- # [[ output == output ]]
00:03:51.847   23:37:22	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:52.106  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:03:52.106  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
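The two setup.sh lines above show the binding policy at work: a device whose partitions back active mounts (vda here) is left on its kernel driver, while a device already claimed by uio_pci_generic is left alone. A heavily hedged sketch of the mount guard only; the real logic lives in scripts/setup.sh and this is not its source:

    # Mount guard sketch, inferred from 'Active devices: mount@vda:vda1 ...
    # so not binding PCI dev'.
    dev=vda
    if grep -qE "^/dev/${dev}[0-9]* " /proc/mounts; then
        echo "mount@${dev}: in use, so not binding PCI dev"
    fi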
00:03:53.045   23:37:23	-- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:53.045   23:37:23	-- setup/hugepages.sh@89 -- # local node
00:03:53.045   23:37:23	-- setup/hugepages.sh@90 -- # local sorted_t
00:03:53.045   23:37:23	-- setup/hugepages.sh@91 -- # local sorted_s
00:03:53.045   23:37:23	-- setup/hugepages.sh@92 -- # local surp
00:03:53.045   23:37:23	-- setup/hugepages.sh@93 -- # local resv
00:03:53.045   23:37:23	-- setup/hugepages.sh@94 -- # local anon
00:03:53.045   23:37:23	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
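The test at hugepages.sh@96 above reads the transparent-hugepage mode string ('always [madvise] never', brackets marking the active setting) and only bothers sampling AnonHugePages when THP is not pinned to [never]. A standalone sketch of that gate, assuming the standard sysfs path:

    # THP gate mirrored from hugepages.sh@96 (sketch).
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # 'always [madvise] never'
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)
    fi
    echo "anon_hugepages=$anon"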
00:03:53.045    23:37:23	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:53.045    23:37:23	-- setup/common.sh@17 -- # local get=AnonHugePages
00:03:53.045    23:37:23	-- setup/common.sh@18 -- # local node=
00:03:53.045    23:37:23	-- setup/common.sh@19 -- # local var val
00:03:53.045    23:37:23	-- setup/common.sh@20 -- # local mem_f mem
00:03:53.045    23:37:23	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.045    23:37:23	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.045    23:37:23	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.045    23:37:23	-- setup/common.sh@28 -- # mapfile -t mem
00:03:53.045    23:37:23	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.045    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.045     23:37:23	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5051588 kB' 'MemAvailable:    9487540 kB' 'Buffers:           35208 kB' 'Cached:          4539360 kB' 'SwapCached:            0 kB' 'Active:          1002820 kB' 'Inactive:        3700236 kB' 'Active(anon):       1084 kB' 'Inactive(anon):   139064 kB' 'Active(file):    1001736 kB' 'Inactive(file):  3561172 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               808 kB' 'Writeback:             0 kB' 'AnonPages:        158036 kB' 'Mapped:            67236 kB' 'Shmem:              2596 kB' 'KReclaimable:     194348 kB' 'Slab:             258864 kB' 'SReclaimable:     194348 kB' 'SUnreclaim:        64516 kB' 'KernelStack:        4336 kB' 'PageTables:         3424 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072912 kB' 'Committed_AS:     498676 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19396 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:53.045    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.045    23:37:23	-- setup/common.sh@31-32 -- # [condensed xtrace: the read loop compares MemTotal through KReclaimable (22 fields so far) against AnonHugePages, hitting continue on every non-match; the scan is still in progress at this point in the log]
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.046    23:37:23	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.046    23:37:23	-- setup/common.sh@33 -- # echo 0
00:03:53.046    23:37:23	-- setup/common.sh@33 -- # return 0
00:03:53.046   23:37:23	-- setup/hugepages.sh@97 -- # anon=0
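The block above is one complete pass of the get_meminfo helper from setup/common.sh, asked for AnonHugePages: because the suite runs under xtrace, every iteration of its field-matching loop is echoed, which is why each /proc/meminfo key appears as its own [[ ... == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] comparison before the match at @33 echoes the value. A minimal sketch reconstructed from the @16-@33 trace lines (not a verbatim copy of the SPDK source; the real helper streams the entries through a read loop rather than a for loop, but the behavior is the same):

    #!/usr/bin/env bash
    # Reconstruction of the get_meminfo flow seen in the trace above.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo mem

        # common.sh@23-24: with a node argument, read the per-node meminfo instead.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem <"$mem_f"                    # common.sh@28
        mem=("${mem[@]#Node +([0-9]) }")            # common.sh@29: drop "Node N " prefixes

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"   # common.sh@31
            [[ $var == "$get" ]] || continue        # common.sh@32: not the field we want
            echo "$val"                             # common.sh@33: value goes to stdout
            return 0
        done
        return 1
    }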
00:03:53.046    23:37:23	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:53.046    23:37:23	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:53.046    23:37:23	-- setup/common.sh@18 -- # local node=
00:03:53.046    23:37:23	-- setup/common.sh@19 -- # local var val
00:03:53.046    23:37:23	-- setup/common.sh@20 -- # local mem_f mem
00:03:53.046    23:37:23	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.046    23:37:23	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.046    23:37:23	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.046    23:37:23	-- setup/common.sh@28 -- # mapfile -t mem
00:03:53.046    23:37:23	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.046    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047     23:37:23	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5052376 kB' 'MemAvailable:    9488328 kB' 'Buffers:           35208 kB' 'Cached:          4539360 kB' 'SwapCached:            0 kB' 'Active:          1002820 kB' 'Inactive:        3699924 kB' 'Active(anon):       1084 kB' 'Inactive(anon):   138752 kB' 'Active(file):    1001736 kB' 'Inactive(file):  3561172 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               808 kB' 'Writeback:             0 kB' 'AnonPages:        157452 kB' 'Mapped:            67196 kB' 'Shmem:              2596 kB' 'KReclaimable:     194348 kB' 'Slab:             258864 kB' 'SReclaimable:     194348 kB' 'SUnreclaim:        64516 kB' 'KernelStack:        4368 kB' 'PageTables:         3504 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072912 kB' 'Committed_AS:     498676 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19396 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.047    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.047    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.048    23:37:23	-- setup/common.sh@33 -- # echo 0
00:03:53.048    23:37:23	-- setup/common.sh@33 -- # return 0
00:03:53.048   23:37:23	-- setup/hugepages.sh@99 -- # surp=0
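Each of these queries is wrapped in a command substitution, so the echo at common.sh@33 is the function's stdout becoming the caller's value (the extra leading space on the nested @16-@33 lines is bash xtrace marking that deeper level). How hugepages.sh@97-@100 consume the helper, with this run's values (a hedged reconstruction of the assignments, not quoted source):

    anon=$(get_meminfo AnonHugePages)   # 0 kB of anonymous huge pages in use
    surp=$(get_meminfo HugePages_Surp)  # 0 surplus pages in the pool
    resv=$(get_meminfo HugePages_Rsvd)  # 0 pages reserved but not yet faulted in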
00:03:53.048    23:37:23	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:53.048    23:37:23	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:53.048    23:37:23	-- setup/common.sh@18 -- # local node=
00:03:53.048    23:37:23	-- setup/common.sh@19 -- # local var val
00:03:53.048    23:37:23	-- setup/common.sh@20 -- # local mem_f mem
00:03:53.048    23:37:23	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.048    23:37:23	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.048    23:37:23	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.048    23:37:23	-- setup/common.sh@28 -- # mapfile -t mem
00:03:53.048    23:37:23	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048     23:37:23	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5052596 kB' 'MemAvailable:    9488548 kB' 'Buffers:           35208 kB' 'Cached:          4539360 kB' 'SwapCached:            0 kB' 'Active:          1002812 kB' 'Inactive:        3699932 kB' 'Active(anon):       1076 kB' 'Inactive(anon):   138760 kB' 'Active(file):    1001736 kB' 'Inactive(file):  3561172 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               808 kB' 'Writeback:             0 kB' 'AnonPages:        157416 kB' 'Mapped:            67236 kB' 'Shmem:              2596 kB' 'KReclaimable:     194348 kB' 'Slab:             258728 kB' 'SReclaimable:     194348 kB' 'SUnreclaim:        64380 kB' 'KernelStack:        4304 kB' 'PageTables:         3348 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072912 kB' 'Committed_AS:     498676 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19412 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.048    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.048    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.049    23:37:23	-- setup/common.sh@33 -- # echo 0
00:03:53.049    23:37:23	-- setup/common.sh@33 -- # return 0
00:03:53.049   23:37:23	-- setup/hugepages.sh@100 -- # resv=0
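The "Node +([0-9]) " strip at common.sh@29 is a no-op for /proc/meminfo; it exists so the same parser also handles per-node meminfo, whose lines carry a "Node N " prefix. A self-contained demonstration of that extglob parameter expansion (the sample lines are illustrative, shaped like the node0 dump later in this log):

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern

    mem=('Node 0 MemTotal:       12242980 kB'
         'Node 0 HugePages_Total:  1024')

    mem=("${mem[@]#Node +([0-9]) }")   # strip "Node <digits> " where present

    printf '%s\n' "${mem[@]}"
    # MemTotal:       12242980 kB
    # HugePages_Total:  1024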
00:03:53.049   23:37:23	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:53.049  nr_hugepages=1024
00:03:53.049   23:37:23	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:53.049  resv_hugepages=0
00:03:53.049   23:37:23	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:53.049  surplus_hugepages=0
00:03:53.049   23:37:23	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:53.049  anon_hugepages=0
00:03:53.049   23:37:23	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:53.049   23:37:23	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:53.049    23:37:23	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:53.049    23:37:23	-- setup/common.sh@17 -- # local get=HugePages_Total
00:03:53.049    23:37:23	-- setup/common.sh@18 -- # local node=
00:03:53.049    23:37:23	-- setup/common.sh@19 -- # local var val
00:03:53.049    23:37:23	-- setup/common.sh@20 -- # local mem_f mem
00:03:53.049    23:37:23	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.049    23:37:23	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.049    23:37:23	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.049    23:37:23	-- setup/common.sh@28 -- # mapfile -t mem
00:03:53.049    23:37:23	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049     23:37:23	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5052620 kB' 'MemAvailable:    9488572 kB' 'Buffers:           35208 kB' 'Cached:          4539360 kB' 'SwapCached:            0 kB' 'Active:          1002812 kB' 'Inactive:        3699940 kB' 'Active(anon):       1076 kB' 'Inactive(anon):   138768 kB' 'Active(file):    1001736 kB' 'Inactive(file):  3561172 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               808 kB' 'Writeback:             0 kB' 'AnonPages:        157404 kB' 'Mapped:            67236 kB' 'Shmem:              2596 kB' 'KReclaimable:     194348 kB' 'Slab:             258728 kB' 'SReclaimable:     194348 kB' 'SUnreclaim:        64380 kB' 'KernelStack:        4320 kB' 'PageTables:         3400 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072912 kB' 'Committed_AS:     498676 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19412 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.049    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.049    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.050    23:37:23	-- setup/common.sh@33 -- # echo 1024
00:03:53.050    23:37:23	-- setup/common.sh@33 -- # return 0
00:03:53.050   23:37:23	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
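With all four numbers in hand, hugepages.sh@107/@109/@110 verify that the kernel's hugepage pool matches what the test configured. Plugging in the values echoed above:

    nr_hugepages=1024 surp=0 resv=0   # from the echoes at hugepages.sh@102-@105
    hp_total=1024                     # HugePages_Total returned at common.sh@33
    (( hp_total == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0, true

Both checks pass, so the script moves on to confirming that the pages are distributed across NUMA nodes as expected.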
00:03:53.050   23:37:23	-- setup/hugepages.sh@112 -- # get_nodes
00:03:53.050   23:37:23	-- setup/hugepages.sh@27 -- # local node
00:03:53.050   23:37:23	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:53.050   23:37:23	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:53.050   23:37:23	-- setup/hugepages.sh@32 -- # no_nodes=1
00:03:53.050   23:37:23	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:53.050   23:37:23	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:53.050   23:37:23	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
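get_nodes found a single NUMA node (no_nodes=1), so the next get_meminfo call passes node=0 and common.sh@23-24 switch the input file to /sys/devices/system/node/node0/meminfo; note in the dump below that its HugePages_* lines carry no kB unit and originally had the "Node 0 " prefix stripped at @29. For reference, the kernel also exposes the same per-node counters as single-value sysfs files, a path this script does not take (sketch assumes the default 2048 kB huge page size seen in this run):

    #!/usr/bin/env bash
    # Read per-node 2 MiB hugepage counters straight from sysfs.
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        nr=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
        free=$(cat "$node/hugepages/hugepages-2048kB/free_hugepages")
        echo "node$n: total=$nr free=$free"
    done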
00:03:53.050    23:37:23	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:53.050    23:37:23	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:53.050    23:37:23	-- setup/common.sh@18 -- # local node=0
00:03:53.050    23:37:23	-- setup/common.sh@19 -- # local var val
00:03:53.050    23:37:23	-- setup/common.sh@20 -- # local mem_f mem
00:03:53.050    23:37:23	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.050    23:37:23	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:53.050    23:37:23	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:53.050    23:37:23	-- setup/common.sh@28 -- # mapfile -t mem
00:03:53.050    23:37:23	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050     23:37:23	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5052348 kB' 'MemUsed:         7190632 kB' 'SwapCached:            0 kB' 'Active:          1002812 kB' 'Inactive:        3700036 kB' 'Active(anon):       1076 kB' 'Inactive(anon):   138864 kB' 'Active(file):    1001736 kB' 'Inactive(file):  3561172 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'Dirty:               808 kB' 'Writeback:             0 kB' 'FilePages:       4574568 kB' 'Mapped:            67236 kB' 'AnonPages:        157512 kB' 'Shmem:              2596 kB' 'KernelStack:        4336 kB' 'PageTables:         3452 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     194348 kB' 'Slab:             258584 kB' 'SReclaimable:     194348 kB' 'SUnreclaim:        64236 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:        0 kB' 'FilePmdMapped:        0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.050    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.050    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.051    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.051    23:37:23	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.051    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.051    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.310    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.310    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # continue
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # IFS=': '
00:03:53.311    23:37:23	-- setup/common.sh@31 -- # read -r var val _
00:03:53.311    23:37:23	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.311    23:37:23	-- setup/common.sh@33 -- # echo 0
00:03:53.311    23:37:23	-- setup/common.sh@33 -- # return 0
00:03:53.311   23:37:23	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
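The node-scoped pass just traced exercises the per-node branch: because /sys/devices/system/node/node0/meminfo exists, get_meminfo reads it instead of /proc/meminfo, strips the 'Node 0 ' prefix from every line, and resolves HugePages_Surp to 0, which @117 then folds into the per-node expectation. Usage under the earlier sketch (hypothetical interactive session):

    $ get_meminfo HugePages_Surp 0   # node-scoped query against node0
    0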
00:03:53.311   23:37:23	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:53.311   23:37:23	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:53.311   23:37:23	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:53.311   23:37:23	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:53.311  node0=1024 expecting 1024
00:03:53.311   23:37:23	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
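The @127 assignments lean on a small bash idiom: writing sorted_t[nodes_test[node]]=1 into an indexed array uses the page count itself as the index, so ${!sorted_t[@]} later expands to the distinct counts in ascending numeric order, which is how 'node0=1024 expecting 1024' collapses into the single [[ 1024 == 1024 ]] comparison at @130. A standalone illustration of the idiom:

    # Distinct values, numerically sorted, via array indices:
    declare -a sorted=()
    for v in 1024 512 1024; do sorted[$v]=1; done
    echo "${!sorted[@]}"   # -> 512 1024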
00:03:53.311   23:37:23	-- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:53.311   23:37:23	-- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:53.311   23:37:23	-- setup/hugepages.sh@202 -- # setup output
00:03:53.311   23:37:23	-- setup/common.sh@9 -- # [[ output == output ]]
00:03:53.311   23:37:23	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:53.571  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:03:53.571  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:53.571  INFO: Requested 512 hugepages but 1024 already allocated on node0
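This is the step the test case is actually about: @202 sets CLEAR_HUGE=no and NRHUGE=512 before running scripts/setup.sh, and the INFO line shows the script's behavior when fewer pages are requested than already exist; with CLEAR_HUGE=no it leaves the existing 1024-page pool untouched rather than shrinking it to 512, and the verify_nr_hugepages pass that follows asserts exactly that. The equivalent one-shot invocation (path as printed in the log):

    CLEAR_HUGE=no NRHUGE=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh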
00:03:53.571   23:37:24	-- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:53.571   23:37:24	-- setup/hugepages.sh@89 -- # local node
00:03:53.571   23:37:24	-- setup/hugepages.sh@90 -- # local sorted_t
00:03:53.571   23:37:24	-- setup/hugepages.sh@91 -- # local sorted_s
00:03:53.571   23:37:24	-- setup/hugepages.sh@92 -- # local surp
00:03:53.571   23:37:24	-- setup/hugepages.sh@93 -- # local resv
00:03:53.571   23:37:24	-- setup/hugepages.sh@94 -- # local anon
00:03:53.571   23:37:24	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
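verify_nr_hugepages only bothers counting AnonHugePages when transparent hugepages are not globally disabled: the @96 test takes the contents of the THP 'enabled' file (here 'always [madvise] never', i.e. madvise mode) and checks that the bracketed active mode is not [never]. Reading the same knob directly (standard sysfs path):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    # The bracketed word is the active mode; skip THP accounting if disabled.
    [[ $thp != *'[never]'* ]] && echo "THP enabled: $thp"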
00:03:53.571    23:37:24	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:53.571    23:37:24	-- setup/common.sh@17 -- # local get=AnonHugePages
00:03:53.571    23:37:24	-- setup/common.sh@18 -- # local node=
00:03:53.571    23:37:24	-- setup/common.sh@19 -- # local var val
00:03:53.571    23:37:24	-- setup/common.sh@20 -- # local mem_f mem
00:03:53.571    23:37:24	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.571    23:37:24	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.571    23:37:24	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.571    23:37:24	-- setup/common.sh@28 -- # mapfile -t mem
00:03:53.571    23:37:24	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.571     23:37:24	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5053200 kB' 'MemAvailable:    9489152 kB' 'Buffers:           35208 kB' 'Cached:          4539360 kB' 'SwapCached:            0 kB' 'Active:          1002820 kB' 'Inactive:        3700280 kB' 'Active(anon):       1084 kB' 'Inactive(anon):   139108 kB' 'Active(file):    1001736 kB' 'Inactive(file):  3561172 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               808 kB' 'Writeback:             0 kB' 'AnonPages:        157764 kB' 'Mapped:            67196 kB' 'Shmem:              2596 kB' 'KReclaimable:     194348 kB' 'Slab:             258880 kB' 'SReclaimable:     194348 kB' 'SUnreclaim:        64532 kB' 'KernelStack:        4336 kB' 'PageTables:         3644 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072912 kB' 'Committed_AS:     498676 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19476 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.571    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.571    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.572    23:37:24	-- setup/common.sh@33 -- # echo 0
00:03:53.572    23:37:24	-- setup/common.sh@33 -- # return 0
00:03:53.572   23:37:24	-- setup/hugepages.sh@97 -- # anon=0
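AnonHugePages reports, in kB, how much anonymous memory THP has promoted to huge pages; it came back '0 kB' in the scan above, so anon=0 and transparent pages cannot distort the hugetlb bookkeeping that follows. Quick manual check:

    grep AnonHugePages /proc/meminfo   # '0 kB' per the snapshot above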
00:03:53.572    23:37:24	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:53.572    23:37:24	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:53.572    23:37:24	-- setup/common.sh@18 -- # local node=
00:03:53.572    23:37:24	-- setup/common.sh@19 -- # local var val
00:03:53.572    23:37:24	-- setup/common.sh@20 -- # local mem_f mem
00:03:53.572    23:37:24	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.572    23:37:24	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.572    23:37:24	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.572    23:37:24	-- setup/common.sh@28 -- # mapfile -t mem
00:03:53.572    23:37:24	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572     23:37:24	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5053956 kB' 'MemAvailable:    9489908 kB' 'Buffers:           35208 kB' 'Cached:          4539360 kB' 'SwapCached:            0 kB' 'Active:          1002820 kB' 'Inactive:        3699836 kB' 'Active(anon):       1084 kB' 'Inactive(anon):   138664 kB' 'Active(file):    1001736 kB' 'Inactive(file):  3561172 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               808 kB' 'Writeback:             0 kB' 'AnonPages:        157320 kB' 'Mapped:            67196 kB' 'Shmem:              2596 kB' 'KReclaimable:     194348 kB' 'Slab:             258880 kB' 'SReclaimable:     194348 kB' 'SUnreclaim:        64532 kB' 'KernelStack:        4324 kB' 'PageTables:         3548 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072912 kB' 'Committed_AS:     498676 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19460 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.572    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.572    23:37:24	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.573    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.573    23:37:24	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.574    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.574    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.574    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.574    23:37:24	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.574    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.574    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.574    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.574    23:37:24	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.574    23:37:24	-- setup/common.sh@33 -- # echo 0
00:03:53.574    23:37:24	-- setup/common.sh@33 -- # return 0
00:03:53.574   23:37:24	-- setup/hugepages.sh@99 -- # surp=0
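HugePages_Surp counts pages the kernel allocated beyond nr_hugepages to satisfy overcommit, and it can only become non-zero when an overcommit pool is configured; with that knob at its default of zero, the surplus figure stays 0 as seen here. The knob lives in procfs:

    cat /proc/sys/vm/nr_overcommit_hugepages   # when 0, HugePages_Surp cannot grow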
00:03:53.574    23:37:24	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:53.574    23:37:24	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:53.574    23:37:24	-- setup/common.sh@18 -- # local node=
00:03:53.574    23:37:24	-- setup/common.sh@19 -- # local var val
00:03:53.574    23:37:24	-- setup/common.sh@20 -- # local mem_f mem
00:03:53.574    23:37:24	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.574    23:37:24	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.574    23:37:24	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.574    23:37:24	-- setup/common.sh@28 -- # mapfile -t mem
00:03:53.835    23:37:24	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.835    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.835     23:37:24	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5054180 kB' 'MemAvailable:    9490136 kB' 'Buffers:           35208 kB' 'Cached:          4539364 kB' 'SwapCached:            0 kB' 'Active:          1002812 kB' 'Inactive:        3699984 kB' 'Active(anon):       1076 kB' 'Inactive(anon):   138808 kB' 'Active(file):    1001736 kB' 'Inactive(file):  3561176 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               808 kB' 'Writeback:             0 kB' 'AnonPages:        157456 kB' 'Mapped:            67236 kB' 'Shmem:              2596 kB' 'KReclaimable:     194348 kB' 'Slab:             258880 kB' 'SReclaimable:     194348 kB' 'SUnreclaim:        64532 kB' 'KernelStack:        4304 kB' 'PageTables:         3344 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072912 kB' 'Committed_AS:     498652 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19460 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:53.835    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.835    23:37:24	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.835    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.835    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.835    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.835    23:37:24	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.835    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.835    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.835    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.835    23:37:24	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.835    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.835    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.835    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.835    23:37:24	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.835    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.835    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.835    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.835    23:37:24	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.835    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.835    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.835    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.835    23:37:24	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.835    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.835    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.835    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.836    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.836    23:37:24	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.836    23:37:24	-- setup/common.sh@33 -- # echo 0
00:03:53.836    23:37:24	-- setup/common.sh@33 -- # return 0
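The scan that just returned is setup/common.sh's get_meminfo: with IFS=': ' each meminfo line splits into a key and a value, every non-matching key hits the continue at @32, and the matching key's value is echoed back at @33. A simplified, runnable sketch of that parser (the real script mapfiles the file and strips per-node "Node N " prefixes with an extglob; sed is used here instead):

    #!/usr/bin/env bash
    # Print one meminfo value, system-wide or for a single NUMA node.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the continue lines in the trace
            echo "$val"
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    get_meminfo HugePages_Rsvd      # -> 0 on this VM
    get_meminfo HugePages_Surp 0    # node 0's surplus pool

The values fetched this way feed the accounting checks below: the configured 1024 pages must equal nr_hugepages plus surplus plus reserved.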
00:03:53.836   23:37:24	-- setup/hugepages.sh@100 -- # resv=0
00:03:53.836   23:37:24	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:53.836  nr_hugepages=1024
00:03:53.836   23:37:24	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:53.836  resv_hugepages=0
00:03:53.836   23:37:24	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:53.836  surplus_hugepages=0
00:03:53.836   23:37:24	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:53.836  anon_hugepages=0
00:03:53.836   23:37:24	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:53.836   23:37:24	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:53.836    23:37:24	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:53.836    23:37:24	-- setup/common.sh@17 -- # local get=HugePages_Total
00:03:53.836    23:37:24	-- setup/common.sh@18 -- # local node=
00:03:53.836    23:37:24	-- setup/common.sh@19 -- # local var val
00:03:53.837    23:37:24	-- setup/common.sh@20 -- # local mem_f mem
00:03:53.837    23:37:24	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.837    23:37:24	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.837    23:37:24	-- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.837    23:37:24	-- setup/common.sh@28 -- # mapfile -t mem
00:03:53.837    23:37:24	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837     23:37:24	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5054180 kB' 'MemAvailable:    9490136 kB' 'Buffers:           35208 kB' 'Cached:          4539364 kB' 'SwapCached:            0 kB' 'Active:          1002812 kB' 'Inactive:        3700496 kB' 'Active(anon):       1076 kB' 'Inactive(anon):   139320 kB' 'Active(file):    1001736 kB' 'Inactive(file):  3561176 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               808 kB' 'Writeback:             0 kB' 'AnonPages:        158116 kB' 'Mapped:            67496 kB' 'Shmem:              2596 kB' 'KReclaimable:     194348 kB' 'Slab:             258880 kB' 'SReclaimable:     194348 kB' 'SUnreclaim:        64532 kB' 'KernelStack:        4384 kB' 'PageTables:         3540 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072912 kB' 'Committed_AS:     501612 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19460 kB' 'VmallocChunk:          0 kB' 'Percpu:             8160 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      135020 kB' 'DirectMap2M:     4059136 kB' 'DirectMap1G:    10485760 kB'
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.837    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.837    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.838    23:37:24	-- setup/common.sh@33 -- # echo 1024
00:03:53.838    23:37:24	-- setup/common.sh@33 -- # return 0
00:03:53.838   23:37:24	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:53.838   23:37:24	-- setup/hugepages.sh@112 -- # get_nodes
00:03:53.838   23:37:24	-- setup/hugepages.sh@27 -- # local node
00:03:53.838   23:37:24	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:53.838   23:37:24	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:53.838   23:37:24	-- setup/hugepages.sh@32 -- # no_nodes=1
00:03:53.838   23:37:24	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
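get_nodes enumerates the NUMA topology: the extglob +([0-9]) expands to one directory per node under /sys/devices/system/node, the numeric suffix becomes the array key via ${node##*node}, and no_nodes ends up as the node count (1 on this VM). A sketch of the same walk; reading the 2 MiB pool counter for the stored value is an assumption about where the 1024 in the trace comes from:

    shopt -s extglob                # enables the +([0-9]) glob seen in the trace
    declare -a nodes_sys
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))          # fail if sysfs shows no NUMA nodes
    }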
00:03:53.838   23:37:24	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:53.838   23:37:24	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:53.838    23:37:24	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:53.838    23:37:24	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:53.838    23:37:24	-- setup/common.sh@18 -- # local node=0
00:03:53.838    23:37:24	-- setup/common.sh@19 -- # local var val
00:03:53.838    23:37:24	-- setup/common.sh@20 -- # local mem_f mem
00:03:53.838    23:37:24	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.838    23:37:24	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:53.838    23:37:24	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:53.838    23:37:24	-- setup/common.sh@28 -- # mapfile -t mem
00:03:53.838    23:37:24	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838     23:37:24	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242980 kB' 'MemFree:         5054180 kB' 'MemUsed:         7188800 kB' 'SwapCached:            0 kB' 'Active:          1002812 kB' 'Inactive:        3700012 kB' 'Active(anon):       1076 kB' 'Inactive(anon):   138836 kB' 'Active(file):    1001736 kB' 'Inactive(file):  3561176 kB' 'Unevictable:       29172 kB' 'Mlocked:           27636 kB' 'Dirty:               808 kB' 'Writeback:             0 kB' 'FilePages:       4574572 kB' 'Mapped:            67236 kB' 'AnonPages:        157532 kB' 'Shmem:              2596 kB' 'KernelStack:        4384 kB' 'PageTables:         3572 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     194348 kB' 'Slab:             258880 kB' 'SReclaimable:     194348 kB' 'SUnreclaim:        64532 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:        0 kB' 'FilePmdMapped:        0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.838    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.838    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.839    23:37:24	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.839    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.839    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.839    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.839    23:37:24	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.839    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.839    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.839    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.839    23:37:24	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.839    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.839    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.839    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.839    23:37:24	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.839    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.839    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.839    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.839    23:37:24	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.839    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.839    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.839    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.839    23:37:24	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.839    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.839    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.839    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.839    23:37:24	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.839    23:37:24	-- setup/common.sh@32 -- # continue
00:03:53.839    23:37:24	-- setup/common.sh@31 -- # IFS=': '
00:03:53.839    23:37:24	-- setup/common.sh@31 -- # read -r var val _
00:03:53.839    23:37:24	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.839    23:37:24	-- setup/common.sh@33 -- # echo 0
00:03:53.839    23:37:24	-- setup/common.sh@33 -- # return 0
00:03:53.839   23:37:24	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:53.839   23:37:24	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:53.839   23:37:24	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:53.839   23:37:24	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:53.839   23:37:24	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:53.839  node0=1024 expecting 1024
00:03:53.839   23:37:24	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
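The \1\0\2\4 in that last test is not corruption: the right-hand side of [[ string == pattern ]] is a glob pattern, and when the script quotes it bash's xtrace prints every character backslash-escaped to show the comparison is literal rather than a wildcard match. Reproducible in any bash session:

    set -x
    expected=1024
    [[ 1024 == "$expected" ]] && echo match
    # xtrace prints: [[ 1024 == \1\0\2\4 ]]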
00:03:53.839  
00:03:53.839  real	0m2.017s
00:03:53.839  user	0m0.626s
00:03:53.839  sys	0m1.245s
00:03:53.839   23:37:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:53.839   23:37:24	-- common/autotest_common.sh@10 -- # set +x
00:03:53.839  ************************************
00:03:53.839  END TEST no_shrink_alloc
00:03:53.839  ************************************
00:03:53.839   23:37:24	-- setup/hugepages.sh@217 -- # clear_hp
00:03:53.839   23:37:24	-- setup/hugepages.sh@37 -- # local node hp
00:03:53.839   23:37:24	-- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:53.839   23:37:24	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:53.839   23:37:24	-- setup/hugepages.sh@41 -- # echo 0
00:03:53.839   23:37:24	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:53.839   23:37:24	-- setup/hugepages.sh@41 -- # echo 0
00:03:53.839   23:37:24	-- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:53.839   23:37:24	-- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
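clear_hp returns the pools the test inflated: for every node recorded by get_nodes it writes 0 into each hugepages-<size>/nr_hugepages counter, then exports CLEAR_HUGE=yes so later setup.sh invocations know the pools are already drained. A sketch matching hugepages.sh@37-45 (root privileges assumed for the sysfs writes):

    clear_hp() {
        local node hp
        for node in "${!nodes_sys[@]}"; do
            for hp in /sys/devices/system/node/node$node/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # drain this node's pool
            done
        done
        export CLEAR_HUGE=yes
    }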
00:03:53.839  ************************************
00:03:53.839  END TEST hugepages
00:03:53.839  ************************************
00:03:53.839  
00:03:53.839  real	0m7.922s
00:03:53.839  user	0m2.521s
00:03:53.839  sys	0m5.218s
00:03:53.839   23:37:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:53.839   23:37:24	-- common/autotest_common.sh@10 -- # set +x
00:03:54.098   23:37:24	-- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:03:54.098   23:37:24	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:54.098   23:37:24	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:54.098   23:37:24	-- common/autotest_common.sh@10 -- # set +x
00:03:54.098  ************************************
00:03:54.098  START TEST driver
00:03:54.098  ************************************
00:03:54.098   23:37:24	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:03:54.098  * Looking for test storage...
00:03:54.098  * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:03:54.098     23:37:24	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:03:54.098      23:37:24	-- common/autotest_common.sh@1690 -- # lcov --version
00:03:54.098      23:37:24	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:03:54.098     23:37:24	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:03:54.098     23:37:24	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:03:54.098     23:37:24	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:03:54.098     23:37:24	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:03:54.098     23:37:24	-- scripts/common.sh@335 -- # IFS=.-:
00:03:54.098     23:37:24	-- scripts/common.sh@335 -- # read -ra ver1
00:03:54.098     23:37:24	-- scripts/common.sh@336 -- # IFS=.-:
00:03:54.098     23:37:24	-- scripts/common.sh@336 -- # read -ra ver2
00:03:54.098     23:37:24	-- scripts/common.sh@337 -- # local 'op=<'
00:03:54.098     23:37:24	-- scripts/common.sh@339 -- # ver1_l=2
00:03:54.098     23:37:24	-- scripts/common.sh@340 -- # ver2_l=1
00:03:54.098     23:37:24	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:03:54.098     23:37:24	-- scripts/common.sh@343 -- # case "$op" in
00:03:54.098     23:37:24	-- scripts/common.sh@344 -- # : 1
00:03:54.098     23:37:24	-- scripts/common.sh@363 -- # (( v = 0 ))
00:03:54.098     23:37:24	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:54.098      23:37:24	-- scripts/common.sh@364 -- # decimal 1
00:03:54.098      23:37:24	-- scripts/common.sh@352 -- # local d=1
00:03:54.098      23:37:24	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:54.098      23:37:24	-- scripts/common.sh@354 -- # echo 1
00:03:54.098     23:37:24	-- scripts/common.sh@364 -- # ver1[v]=1
00:03:54.098      23:37:24	-- scripts/common.sh@365 -- # decimal 2
00:03:54.098      23:37:24	-- scripts/common.sh@352 -- # local d=2
00:03:54.098      23:37:24	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:54.098      23:37:24	-- scripts/common.sh@354 -- # echo 2
00:03:54.098     23:37:24	-- scripts/common.sh@365 -- # ver2[v]=2
00:03:54.098     23:37:24	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:03:54.098     23:37:24	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:03:54.098     23:37:24	-- scripts/common.sh@367 -- # return 0
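The lt 1.15 2 call that just returned is scripts/common.sh comparing version strings field by field: IFS=.-: splits each version on dots, dashes, and colons, decimal normalizes each field to an integer, and the first unequal pair decides the result. A self-contained sketch of the same comparison (numeric fields assumed):

    # Succeed when version $1 sorts before version $2.
    ver_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo "installed lcov predates 2.x"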
00:03:54.098     23:37:24	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:54.098     23:37:24	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:03:54.098  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:54.098  		--rc genhtml_branch_coverage=1
00:03:54.098  		--rc genhtml_function_coverage=1
00:03:54.098  		--rc genhtml_legend=1
00:03:54.098  		--rc geninfo_all_blocks=1
00:03:54.098  		--rc geninfo_unexecuted_blocks=1
00:03:54.098  		
00:03:54.098  		'
00:03:54.098     23:37:24	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:03:54.098  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:54.098  		--rc genhtml_branch_coverage=1
00:03:54.098  		--rc genhtml_function_coverage=1
00:03:54.098  		--rc genhtml_legend=1
00:03:54.098  		--rc geninfo_all_blocks=1
00:03:54.098  		--rc geninfo_unexecuted_blocks=1
00:03:54.098  		
00:03:54.098  		'
00:03:54.098     23:37:24	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:03:54.098  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:54.098  		--rc genhtml_branch_coverage=1
00:03:54.098  		--rc genhtml_function_coverage=1
00:03:54.098  		--rc genhtml_legend=1
00:03:54.098  		--rc geninfo_all_blocks=1
00:03:54.098  		--rc geninfo_unexecuted_blocks=1
00:03:54.098  		
00:03:54.098  		'
00:03:54.098     23:37:24	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:03:54.098  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:54.098  		--rc genhtml_branch_coverage=1
00:03:54.098  		--rc genhtml_function_coverage=1
00:03:54.098  		--rc genhtml_legend=1
00:03:54.098  		--rc geninfo_all_blocks=1
00:03:54.098  		--rc geninfo_unexecuted_blocks=1
00:03:54.098  		
00:03:54.098  		'
00:03:54.098   23:37:24	-- setup/driver.sh@68 -- # setup reset
00:03:54.098   23:37:24	-- setup/common.sh@9 -- # [[ reset == output ]]
00:03:54.098   23:37:24	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
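Every `setup <mode>` in these traces goes through a small wrapper in setup/common.sh: when the first argument is output, the remaining words run with stdout kept for the caller to parse (as `setup output config` does below); any other mode, here reset, is passed straight to scripts/setup.sh, which unbinds the test devices from userspace drivers and returns them to the kernel. A hedged reconstruction from common.sh@9-@12; whether non-output modes silence their output is an assumption:

    setup() {
        if [[ $1 == output ]]; then
            shift
            /home/vagrant/spdk_repo/spdk/scripts/setup.sh "$@"              # keep stdout
        else
            /home/vagrant/spdk_repo/spdk/scripts/setup.sh "$@" &>/dev/null  # assumed quiet
        fi
    }
    setup reset    # rebind devices to their kernel drivers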
00:03:54.666   23:37:25	-- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:54.666   23:37:25	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:54.666   23:37:25	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:54.666   23:37:25	-- common/autotest_common.sh@10 -- # set +x
00:03:54.666  ************************************
00:03:54.666  START TEST guess_driver
00:03:54.666  ************************************
00:03:54.666   23:37:25	-- common/autotest_common.sh@1114 -- # guess_driver
00:03:54.666   23:37:25	-- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:54.666   23:37:25	-- setup/driver.sh@47 -- # local fail=0
00:03:54.666    23:37:25	-- setup/driver.sh@49 -- # pick_driver
00:03:54.666    23:37:25	-- setup/driver.sh@36 -- # vfio
00:03:54.666    23:37:25	-- setup/driver.sh@21 -- # local iommu_groups
00:03:54.666    23:37:25	-- setup/driver.sh@22 -- # local unsafe_vfio
00:03:54.666    23:37:25	-- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:54.666    23:37:25	-- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:54.666    23:37:25	-- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:54.666    23:37:25	-- setup/driver.sh@29 -- # (( 0 > 0 ))
00:03:54.666    23:37:25	-- setup/driver.sh@29 -- # [[ N == Y ]]
00:03:54.666    23:37:25	-- setup/driver.sh@32 -- # return 1
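pick_driver prefers vfio-pci but only when the host can actually use it: there must be at least one populated IOMMU group under /sys/kernel/iommu_groups, or the vfio module's enable_unsafe_noiommu_mode parameter must read Y. Here the group count is 0 and the parameter is N, so the vfio branch returns 1 and the uio fallback is tried next. The eligibility test as a standalone sketch:

    vfio_usable() {
        local unsafe=N
        local param=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode
        [[ -e $param ]] && unsafe=$(<"$param")
        shopt -s nullglob                      # empty dir -> zero-length array
        local groups=(/sys/kernel/iommu_groups/*)
        (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]
    }
    vfio_usable && echo vfio || echo "falling back to uio_pci_generic"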
00:03:54.666    23:37:25	-- setup/driver.sh@38 -- # uio
00:03:54.666    23:37:25	-- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:03:54.666    23:37:25	-- setup/driver.sh@14 -- # mod uio_pci_generic
00:03:54.666     23:37:25	-- setup/driver.sh@12 -- # dep uio_pci_generic
00:03:54.666     23:37:25	-- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:03:54.666    23:37:25	-- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 
00:03:54.666  insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko  == *\.\k\o* ]]
00:03:54.666    23:37:25	-- setup/driver.sh@39 -- # echo uio_pci_generic
00:03:54.666  Looking for driver=uio_pci_generic
00:03:54.666   23:37:25	-- setup/driver.sh@49 -- # driver=uio_pci_generic
00:03:54.666   23:37:25	-- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
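The uio probe loads nothing: `modprobe --show-depends uio_pci_generic` is a dry run that prints the insmod commands it would execute, so if the output mentions .ko paths (as in the two insmod lines above) the module and its uio dependency exist for this kernel, and the driver name is accepted. The check in isolation:

    is_driver() {
        # --show-depends lists 'insmod /lib/modules/.../<mod>.ko' without loading
        [[ $(modprobe --show-depends "$1" 2>/dev/null) == *.ko* ]]
    }
    is_driver uio_pci_generic && driver=uio_pci_generic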
00:03:54.666   23:37:25	-- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic'
00:03:54.666   23:37:25	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:54.666    23:37:25	-- setup/driver.sh@45 -- # setup output config
00:03:54.666    23:37:25	-- setup/common.sh@9 -- # [[ output == output ]]
00:03:54.666    23:37:25	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:03:54.925   23:37:25	-- setup/driver.sh@58 -- # [[ devices: == \-\> ]]
00:03:54.925   23:37:25	-- setup/driver.sh@58 -- # continue
00:03:54.925   23:37:25	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:55.183   23:37:25	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:55.183   23:37:25	-- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:03:55.183   23:37:25	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
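With the driver picked, guess_driver replays `setup output config` and checks what setup.sh actually chose: each line is read as `_ _ _ _ marker setup_driver`, lines whose fifth field is not the literal -> marker (like the devices: header above) are skipped with continue, and on a marker line the sixth field must equal the picked driver. A reduced sketch of the scan; the per-device output format ("<bdf> (<vendor> <device>): <name> -> <driver>") is inferred from the field positions:

    driver=uio_pci_generic fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue           # skip headers
        [[ $setup_driver == "$driver" ]] || fail=1  # every device must match
    done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh config)
    (( fail == 0 )) && echo "setup.sh bound everything to $driver"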
00:03:56.559   23:37:27	-- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:56.559   23:37:27	-- setup/driver.sh@65 -- # setup reset
00:03:56.559   23:37:27	-- setup/common.sh@9 -- # [[ reset == output ]]
00:03:56.559   23:37:27	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:57.126  
00:03:57.126  real	0m2.342s
00:03:57.126  user	0m0.471s
00:03:57.126  sys	0m1.860s
00:03:57.126   23:37:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:57.126  ************************************
00:03:57.126  END TEST guess_driver
00:03:57.126  ************************************
00:03:57.126   23:37:27	-- common/autotest_common.sh@10 -- # set +x
00:03:57.126  
00:03:57.126  real	0m3.051s
00:03:57.126  user	0m0.859s
00:03:57.126  sys	0m2.204s
00:03:57.126   23:37:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:57.126   23:37:27	-- common/autotest_common.sh@10 -- # set +x
00:03:57.126  ************************************
00:03:57.126  END TEST driver
00:03:57.126  ************************************
00:03:57.126   23:37:27	-- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:03:57.126   23:37:27	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:57.126   23:37:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:57.126   23:37:27	-- common/autotest_common.sh@10 -- # set +x
00:03:57.126  ************************************
00:03:57.126  START TEST devices
00:03:57.126  ************************************
00:03:57.126   23:37:27	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:03:57.126  * Looking for test storage...
00:03:57.126  * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:03:57.126     23:37:27	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:03:57.126      23:37:27	-- common/autotest_common.sh@1690 -- # lcov --version
00:03:57.126      23:37:27	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:03:57.126     23:37:27	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:03:57.126     23:37:27	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:03:57.126     23:37:27	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:03:57.126     23:37:27	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:03:57.126     23:37:27	-- scripts/common.sh@335 -- # IFS=.-:
00:03:57.126     23:37:27	-- scripts/common.sh@335 -- # read -ra ver1
00:03:57.126     23:37:27	-- scripts/common.sh@336 -- # IFS=.-:
00:03:57.126     23:37:27	-- scripts/common.sh@336 -- # read -ra ver2
00:03:57.416     23:37:27	-- scripts/common.sh@337 -- # local 'op=<'
00:03:57.416     23:37:27	-- scripts/common.sh@339 -- # ver1_l=2
00:03:57.416     23:37:27	-- scripts/common.sh@340 -- # ver2_l=1
00:03:57.416     23:37:27	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:03:57.416     23:37:27	-- scripts/common.sh@343 -- # case "$op" in
00:03:57.416     23:37:27	-- scripts/common.sh@344 -- # : 1
00:03:57.416     23:37:27	-- scripts/common.sh@363 -- # (( v = 0 ))
00:03:57.416     23:37:27	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:57.416      23:37:27	-- scripts/common.sh@364 -- # decimal 1
00:03:57.416      23:37:27	-- scripts/common.sh@352 -- # local d=1
00:03:57.416      23:37:27	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:57.416      23:37:27	-- scripts/common.sh@354 -- # echo 1
00:03:57.416     23:37:27	-- scripts/common.sh@364 -- # ver1[v]=1
00:03:57.416      23:37:27	-- scripts/common.sh@365 -- # decimal 2
00:03:57.416      23:37:27	-- scripts/common.sh@352 -- # local d=2
00:03:57.416      23:37:27	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:57.416      23:37:27	-- scripts/common.sh@354 -- # echo 2
00:03:57.416     23:37:27	-- scripts/common.sh@365 -- # ver2[v]=2
00:03:57.416     23:37:27	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:03:57.416     23:37:27	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:03:57.416     23:37:27	-- scripts/common.sh@367 -- # return 0
00:03:57.416     23:37:27	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:57.416     23:37:27	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:03:57.416  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.416  		--rc genhtml_branch_coverage=1
00:03:57.416  		--rc genhtml_function_coverage=1
00:03:57.416  		--rc genhtml_legend=1
00:03:57.416  		--rc geninfo_all_blocks=1
00:03:57.416  		--rc geninfo_unexecuted_blocks=1
00:03:57.416  		
00:03:57.416  		'
00:03:57.416     23:37:27	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:03:57.416  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.416  		--rc genhtml_branch_coverage=1
00:03:57.416  		--rc genhtml_function_coverage=1
00:03:57.416  		--rc genhtml_legend=1
00:03:57.416  		--rc geninfo_all_blocks=1
00:03:57.416  		--rc geninfo_unexecuted_blocks=1
00:03:57.416  		
00:03:57.416  		'
00:03:57.416     23:37:27	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:03:57.416  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.416  		--rc genhtml_branch_coverage=1
00:03:57.416  		--rc genhtml_function_coverage=1
00:03:57.416  		--rc genhtml_legend=1
00:03:57.416  		--rc geninfo_all_blocks=1
00:03:57.416  		--rc geninfo_unexecuted_blocks=1
00:03:57.416  		
00:03:57.416  		'
00:03:57.416     23:37:27	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:03:57.417  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.417  		--rc genhtml_branch_coverage=1
00:03:57.417  		--rc genhtml_function_coverage=1
00:03:57.417  		--rc genhtml_legend=1
00:03:57.417  		--rc geninfo_all_blocks=1
00:03:57.417  		--rc geninfo_unexecuted_blocks=1
00:03:57.417  		
00:03:57.417  		'
00:03:57.417   23:37:27	-- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:57.417   23:37:27	-- setup/devices.sh@192 -- # setup reset
00:03:57.417   23:37:27	-- setup/common.sh@9 -- # [[ reset == output ]]
00:03:57.417   23:37:27	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:57.675   23:37:28	-- setup/devices.sh@194 -- # get_zoned_devs
00:03:57.675   23:37:28	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:03:57.675   23:37:28	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:03:57.675   23:37:28	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:03:57.675   23:37:28	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:03:57.675   23:37:28	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:03:57.675   23:37:28	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:03:57.675   23:37:28	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:57.675   23:37:28	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
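get_zoned_devs builds a map of zoned namespaces so later tests can steer around them: for each /sys/block/nvme* entry it reads queue/zoned, and anything other than none marks the device as zoned. On this VM nvme0n1 reports none, so the map stays empty and the final [[ none != none ]] is false. A sketch of the detection (keying the map by block name is an assumption; the real script may key by PCI address):

    declare -A zoned_devs
    for nvme in /sys/block/nvme*; do
        zoned=$(<"$nvme/queue/zoned")
        # 'none' means a conventional namespace; anything else is zoned storage
        [[ $zoned != none ]] && zoned_devs[${nvme##*/}]=$zoned
    done
    echo "zoned devices found: ${#zoned_devs[@]}"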
00:03:57.675   23:37:28	-- setup/devices.sh@196 -- # blocks=()
00:03:57.675   23:37:28	-- setup/devices.sh@196 -- # declare -a blocks
00:03:57.675   23:37:28	-- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:57.675   23:37:28	-- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:57.675   23:37:28	-- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:57.675   23:37:28	-- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:57.675   23:37:28	-- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:57.675   23:37:28	-- setup/devices.sh@201 -- # ctrl=nvme0
00:03:57.675   23:37:28	-- setup/devices.sh@202 -- # pci=0000:00:06.0
00:03:57.675   23:37:28	-- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]]
00:03:57.675   23:37:28	-- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:57.675   23:37:28	-- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:03:57.675   23:37:28	-- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:03:57.675  No valid GPT data, bailing
00:03:57.675    23:37:28	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:57.675   23:37:28	-- scripts/common.sh@393 -- # pt=
00:03:57.675   23:37:28	-- scripts/common.sh@394 -- # return 1
00:03:57.675    23:37:28	-- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:57.675    23:37:28	-- setup/common.sh@76 -- # local dev=nvme0n1
00:03:57.675    23:37:28	-- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:57.675    23:37:28	-- setup/common.sh@80 -- # echo 5368709120
00:03:57.675   23:37:28	-- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size ))
00:03:57.675   23:37:28	-- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:57.675   23:37:28	-- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0
00:03:57.675   23:37:28	-- setup/devices.sh@209 -- # (( 1 > 0 ))
00:03:57.934   23:37:28	-- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
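devices.sh@196-211 assembles the candidate disks: the extglob "/sys/block/nvme"!(*c*) skips controller-path nodes such as nvme0c0n1, spdk-gpt.py and blkid -s PTTYPE confirm the namespace carries no live partition table ("No valid GPT data, bailing" is the desired outcome here), and only disks of at least min_disk_size=3221225472 bytes (3 GiB) survive; the 5368709120 echoed above is this 5 GiB namespace. A sketch of the filter; computing bytes from the 512-byte sector count in /sys/block/<dev>/size and resolving the PCI address through the controller symlink are assumptions about setup/common.sh and devices.sh:

    min_disk_size=$((3 * 1024 * 1024 * 1024))
    blocks=()
    declare -A blocks_to_pci
    shopt -s extglob
    for block in /sys/block/nvme!(*c*); do
        dev=${block##*/}
        bytes=$(( $(<"$block/size") * 512 ))    # sysfs size is in 512 B sectors
        (( bytes >= min_disk_size )) || continue
        blocks+=("$dev")
        # namespace -> controller -> PCI function, e.g. 0000:00:06.0 (assumed path)
        blocks_to_pci[$dev]=$(basename "$(readlink -f "$block/device/device")")
    done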
00:03:57.934   23:37:28	-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:57.934   23:37:28	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:57.934   23:37:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:57.934   23:37:28	-- common/autotest_common.sh@10 -- # set +x
00:03:57.935  ************************************
00:03:57.935  START TEST nvme_mount
00:03:57.935  ************************************
00:03:57.935   23:37:28	-- common/autotest_common.sh@1114 -- # nvme_mount
00:03:57.935   23:37:28	-- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:03:57.935   23:37:28	-- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:03:57.935   23:37:28	-- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:03:57.935   23:37:28	-- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:03:57.935   23:37:28	-- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:03:57.935   23:37:28	-- setup/common.sh@39 -- # local disk=nvme0n1
00:03:57.935   23:37:28	-- setup/common.sh@40 -- # local part_no=1
00:03:57.935   23:37:28	-- setup/common.sh@41 -- # local size=1073741824
00:03:57.935   23:37:28	-- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:57.935   23:37:28	-- setup/common.sh@44 -- # parts=()
00:03:57.935   23:37:28	-- setup/common.sh@44 -- # local parts
00:03:57.935   23:37:28	-- setup/common.sh@46 -- # (( part = 1 ))
00:03:57.935   23:37:28	-- setup/common.sh@46 -- # (( part <= part_no ))
00:03:57.935   23:37:28	-- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:57.935   23:37:28	-- setup/common.sh@46 -- # (( part++ ))
00:03:57.935   23:37:28	-- setup/common.sh@46 -- # (( part <= part_no ))
00:03:57.935   23:37:28	-- setup/common.sh@51 -- # (( size /= 4096 ))
00:03:57.935   23:37:28	-- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:57.935   23:37:28	-- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:58.871  Creating new GPT entries in memory.
00:03:58.871  GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:58.871  other utilities.
00:03:58.871   23:37:29	-- setup/common.sh@57 -- # (( part = 1 ))
00:03:58.871   23:37:29	-- setup/common.sh@57 -- # (( part <= part_no ))
00:03:58.871   23:37:29	-- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:58.871   23:37:29	-- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:58.871   23:37:29	-- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
00:03:59.806  Creating new GPT entries in memory.
00:03:59.806  The operation has completed successfully.
00:03:59.806   23:37:30	-- setup/common.sh@57 -- # (( part++ ))
00:03:59.806   23:37:30	-- setup/common.sh@57 -- # (( part <= part_no ))
00:03:59.806   23:37:30	-- setup/common.sh@62 -- # wait 96689
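partition_drive computes the single partition's geometry up front and lets sgdisk do the work: size=1073741824 (1 GiB) divided by 4096 gives 262144 units, part_start is 2048 for the first partition, and part_end = 2048 + 262144 - 1 = 264191, exactly the --new=1:2048:264191 above; sync_dev_uevents (the backgrounded helper reaped by `wait 96689`) holds the test until udev has announced nvme0n1p1. The arithmetic as the script runs it:

    # Destructive: erases the GPT on /dev/nvme0n1 before re-partitioning.
    size=1073741824 part_start=0 part_end=0
    (( size /= 4096 ))                             # -> 262144 sgdisk units
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))         # 264191
    sgdisk /dev/nvme0n1 --zap-all
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:${part_start}:${part_end}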
00:03:59.806   23:37:30	-- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:03:59.806   23:37:30	-- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=
00:03:59.806   23:37:30	-- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:03:59.806   23:37:30	-- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:59.806   23:37:30	-- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:59.806   23:37:30	-- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:00.065   23:37:30	-- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:00.065   23:37:30	-- setup/devices.sh@48 -- # local dev=0000:00:06.0
00:04:00.065   23:37:30	-- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:04:00.065   23:37:30	-- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:00.065   23:37:30	-- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:00.065   23:37:30	-- setup/devices.sh@53 -- # local found=0
00:04:00.065   23:37:30	-- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:00.065   23:37:30	-- setup/devices.sh@56 -- # :
00:04:00.065   23:37:30	-- setup/devices.sh@59 -- # local pci status
00:04:00.065   23:37:30	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:00.065    23:37:30	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0
00:04:00.065    23:37:30	-- setup/devices.sh@47 -- # setup output config
00:04:00.065    23:37:30	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:00.065    23:37:30	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:00.065   23:37:30	-- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:00.065   23:37:30	-- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:04:00.065   23:37:30	-- setup/devices.sh@63 -- # found=1
00:04:00.065   23:37:30	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:00.065   23:37:30	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:00.065   23:37:30	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:00.324   23:37:30	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:00.324   23:37:30	-- setup/devices.sh@60 -- # read -r pci _ _ status
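verify then turns setup.sh loose on only the test device: PCI_ALLOWED=0000:00:06.0 scopes the config run, each output line is read as `pci _ _ status`, and for the allowed device the status must contain "Active devices: ...nvme0n1:nvme0n1p1..., so not binding PCI dev", proof that setup.sh refuses to rebind a disk with a mounted filesystem. A sketch of the scan, output format inferred from the trace:

    dev=0000:00:06.0 mounts=nvme0n1:nvme0n1p1 found=0
    while read -r pci _ _ status; do
        [[ $pci == "$dev" ]] || continue
        # a mounted namespace must keep setup.sh from rebinding the device
        [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED=$dev /home/vagrant/spdk_repo/spdk/scripts/setup.sh config)
    (( found == 1 )) || echo "verify failed: device was not skipped" >&2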
00:04:01.702   23:37:32	-- setup/devices.sh@66 -- # (( found == 1 ))
00:04:01.702   23:37:32	-- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]]
00:04:01.702   23:37:32	-- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:01.702   23:37:32	-- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:01.702   23:37:32	-- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:01.702   23:37:32	-- setup/devices.sh@110 -- # cleanup_nvme
00:04:01.702   23:37:32	-- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:01.702   23:37:32	-- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:01.702   23:37:32	-- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:01.702   23:37:32	-- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:01.702  /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:01.702   23:37:32	-- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:01.702   23:37:32	-- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:01.702  /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:04:01.702  /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:04:01.702  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:01.702  /dev/nvme0n1: calling ioctl to re-read partition table: Success
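cleanup_nvme erases every signature wipefs can find, partition first, then parent disk: the two bytes at 0x438 are the ext4 superblock magic (53 ef), and on the whole disk the primary GPT header ("EFI PART", 45 46 49 20 50 41 52 54), the backup header at the end of the device, and the protective MBR's 55 aa all go before the kernel re-reads the partition table. The teardown in isolation, destructive by design:

    cleanup_nvme() {
        local mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
        mountpoint -q "$mnt" && umount "$mnt"
        # wipe the filesystem signature, then all GPT/MBR metadata
        [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
        [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
    }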
00:04:01.702   23:37:32	-- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M
00:04:01.702   23:37:32	-- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M
00:04:01.702   23:37:32	-- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:01.702   23:37:32	-- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:04:01.702   23:37:32	-- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:04:01.702   23:37:32	-- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
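The mkfs helper in setup/common.sh (@66-@72 above) creates the mount point, formats the device, and mounts it in one pass. A short sketch, assuming the optional trailing size (the 1024M above) is handed straight to mkfs.ext4 as the filesystem size; the function name here is illustrative:

    # Sketch of the format-and-mount step traced above.
    mkfs_and_mount() {
        local dev=$1 mount=$2 size=$3      # size is optional, e.g. 1024M
        mkdir -p "$mount"
        [[ -e $dev ]] || return 1
        # -q: quiet, -F: force (no prompt even when $dev is a whole disk).
        mkfs.ext4 -qF "$dev" $size
        mount "$dev" "$mount"
    }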
00:04:01.702   23:37:32	-- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:01.702   23:37:32	-- setup/devices.sh@48 -- # local dev=0000:00:06.0
00:04:01.702   23:37:32	-- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:04:01.702   23:37:32	-- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:01.702   23:37:32	-- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:01.702   23:37:32	-- setup/devices.sh@53 -- # local found=0
00:04:01.702   23:37:32	-- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:01.702   23:37:32	-- setup/devices.sh@56 -- # :
00:04:01.702   23:37:32	-- setup/devices.sh@59 -- # local pci status
00:04:01.702   23:37:32	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:01.702    23:37:32	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0
00:04:01.702    23:37:32	-- setup/devices.sh@47 -- # setup output config
00:04:01.702    23:37:32	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.702    23:37:32	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:01.960   23:37:32	-- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:01.960   23:37:32	-- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:04:01.961   23:37:32	-- setup/devices.sh@63 -- # found=1
00:04:01.961   23:37:32	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:01.961   23:37:32	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:01.961   23:37:32	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:02.219   23:37:32	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:02.219   23:37:32	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.596   23:37:34	-- setup/devices.sh@66 -- # (( found == 1 ))
00:04:03.596   23:37:34	-- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]]
00:04:03.596   23:37:34	-- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:03.596   23:37:34	-- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:03.596   23:37:34	-- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:04:03.596   23:37:34	-- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:03.596   23:37:34	-- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' ''
00:04:03.596   23:37:34	-- setup/devices.sh@48 -- # local dev=0000:00:06.0
00:04:03.596   23:37:34	-- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:04:03.596   23:37:34	-- setup/devices.sh@50 -- # local mount_point=
00:04:03.596   23:37:34	-- setup/devices.sh@51 -- # local test_file=
00:04:03.596   23:37:34	-- setup/devices.sh@53 -- # local found=0
00:04:03.596   23:37:34	-- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:03.596   23:37:34	-- setup/devices.sh@59 -- # local pci status
00:04:03.596   23:37:34	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.596    23:37:34	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0
00:04:03.596    23:37:34	-- setup/devices.sh@47 -- # setup output config
00:04:03.596    23:37:34	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.596    23:37:34	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:03.855   23:37:34	-- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:03.855   23:37:34	-- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:04:03.855   23:37:34	-- setup/devices.sh@63 -- # found=1
00:04:03.855   23:37:34	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.855   23:37:34	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:03.855   23:37:34	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:03.855   23:37:34	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:03.855   23:37:34	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:05.759   23:37:35	-- setup/devices.sh@66 -- # (( found == 1 ))
00:04:05.759   23:37:35	-- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:05.759   23:37:35	-- setup/devices.sh@68 -- # return 0
00:04:05.759   23:37:35	-- setup/devices.sh@128 -- # cleanup_nvme
00:04:05.759   23:37:35	-- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:05.759   23:37:35	-- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:05.759   23:37:35	-- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:05.759   23:37:35	-- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:05.759  /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:05.759  
00:04:05.759  real	0m7.587s
00:04:05.759  user	0m0.720s
00:04:05.759  sys	0m4.870s
00:04:05.759   23:37:36	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:05.759  ************************************
00:04:05.759  END TEST nvme_mount
00:04:05.759   23:37:36	-- common/autotest_common.sh@10 -- # set +x
00:04:05.759  ************************************
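The banner above closes the nvme_mount test. Its core assertion is the verify() loop (devices.sh@48-@66 in the trace), which restricts setup.sh to one BDF via PCI_ALLOWED, reads each "pci ... status" line it prints, and requires the device under test to report the expected active mounts. A hedged reconstruction of that loop from the xtrace; the real helper may differ:

    # Sketch of verify() as traced above.
    verify() {
        local dev=$1 mounts=$2 mount_point=$3 test_file=$4
        local found=0 pci status
        while read -r pci _ _ status; do
            # Only the allowed BDF should list the expected active devices.
            [[ $pci == "$dev" && $status == *"Active devices: "*"$mounts"* ]] && found=1
        done < <(PCI_ALLOWED=$dev "$rootdir/scripts/setup.sh" config)
        (( found == 1 )) || return 1
        # With a mount point given, it must be live and hold the test file.
        [[ -z $mount_point ]] || { mountpoint -q "$mount_point" && [[ -e $test_file ]]; }
    }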
00:04:05.759   23:37:36	-- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:05.759   23:37:36	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:05.759   23:37:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:05.759   23:37:36	-- common/autotest_common.sh@10 -- # set +x
00:04:05.759  ************************************
00:04:05.759  START TEST dm_mount
00:04:05.759  ************************************
00:04:05.759   23:37:36	-- common/autotest_common.sh@1114 -- # dm_mount
00:04:05.759   23:37:36	-- setup/devices.sh@144 -- # pv=nvme0n1
00:04:05.759   23:37:36	-- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:04:05.759   23:37:36	-- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:04:05.759   23:37:36	-- setup/devices.sh@148 -- # partition_drive nvme0n1
00:04:05.759   23:37:36	-- setup/common.sh@39 -- # local disk=nvme0n1
00:04:05.759   23:37:36	-- setup/common.sh@40 -- # local part_no=2
00:04:05.759   23:37:36	-- setup/common.sh@41 -- # local size=1073741824
00:04:05.759   23:37:36	-- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:05.759   23:37:36	-- setup/common.sh@44 -- # parts=()
00:04:05.759   23:37:36	-- setup/common.sh@44 -- # local parts
00:04:05.759   23:37:36	-- setup/common.sh@46 -- # (( part = 1 ))
00:04:05.759   23:37:36	-- setup/common.sh@46 -- # (( part <= part_no ))
00:04:05.759   23:37:36	-- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:05.759   23:37:36	-- setup/common.sh@46 -- # (( part++ ))
00:04:05.759   23:37:36	-- setup/common.sh@46 -- # (( part <= part_no ))
00:04:05.759   23:37:36	-- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:05.759   23:37:36	-- setup/common.sh@46 -- # (( part++ ))
00:04:05.759   23:37:36	-- setup/common.sh@46 -- # (( part <= part_no ))
00:04:05.759   23:37:36	-- setup/common.sh@51 -- # (( size /= 4096 ))
00:04:05.759   23:37:36	-- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:05.759   23:37:36	-- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:06.696  Creating new GPT entries in memory.
00:04:06.696  GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:06.696  other utilities.
00:04:06.696   23:37:37	-- setup/common.sh@57 -- # (( part = 1 ))
00:04:06.696   23:37:37	-- setup/common.sh@57 -- # (( part <= part_no ))
00:04:06.696   23:37:37	-- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:06.696   23:37:37	-- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:06.696   23:37:37	-- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
00:04:07.670  Creating new GPT entries in memory.
00:04:07.670  The operation has completed successfully.
00:04:07.670   23:37:38	-- setup/common.sh@57 -- # (( part++ ))
00:04:07.670   23:37:38	-- setup/common.sh@57 -- # (( part <= part_no ))
00:04:07.670   23:37:38	-- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:07.670   23:37:38	-- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:07.670   23:37:38	-- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335
00:04:08.607  The operation has completed successfully.
00:04:08.607   23:37:39	-- setup/common.sh@57 -- # (( part++ ))
00:04:08.607   23:37:39	-- setup/common.sh@57 -- # (( part <= part_no ))
00:04:08.607   23:37:39	-- setup/common.sh@62 -- # wait 97193
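The two flock'd sgdisk calls above come from partition_drive in setup/common.sh: the requested byte size (1 GiB) is divided by 4096 into 262144 units, the first partition starts at 2048, and each subsequent one begins right after the previous end (2048..264191, then 264192..526335). A minimal sketch of that arithmetic:

    # Sketch of the partition loop traced above (partition_drive).
    disk=/dev/nvme0n1 part_no=2
    size=$(( 1073741824 / 4096 ))        # 262144 units per partition
    sgdisk "$disk" --zap-all
    part_start=0 part_end=0
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        # flock serializes sgdisk invocations against the same disk node.
        flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
    done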
00:04:08.607   23:37:39	-- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:04:08.607   23:37:39	-- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:08.607   23:37:39	-- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:04:08.607   23:37:39	-- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:04:08.607   23:37:39	-- setup/devices.sh@160 -- # for t in {1..5}
00:04:08.607   23:37:39	-- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:08.607   23:37:39	-- setup/devices.sh@161 -- # break
00:04:08.607   23:37:39	-- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:08.607    23:37:39	-- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:04:08.607   23:37:39	-- setup/devices.sh@165 -- # dm=/dev/dm-0
00:04:08.607   23:37:39	-- setup/devices.sh@166 -- # dm=dm-0
00:04:08.607   23:37:39	-- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:04:08.607   23:37:39	-- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
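Once dmsetup create returns, the test resolves the /dev/mapper symlink to its kernel dm-N name and checks that both backing partitions list it under holders in sysfs, proving the mapping is wired up. A sketch of that resolution step, using the names from the trace:

    # Sketch of the dm-node resolution and holder check traced above.
    dm=$(readlink -f /dev/mapper/nvme_dm_test)   # e.g. /dev/dm-0
    dm=${dm##*/}                                 # -> dm-0
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]] || exit 1
    [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]] || exit 1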
00:04:08.607   23:37:39	-- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:08.607   23:37:39	-- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size=
00:04:08.607   23:37:39	-- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:08.607   23:37:39	-- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:08.607   23:37:39	-- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:04:08.607   23:37:39	-- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:08.607   23:37:39	-- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:04:08.607   23:37:39	-- setup/devices.sh@48 -- # local dev=0000:00:06.0
00:04:08.607   23:37:39	-- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:04:08.607   23:37:39	-- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:08.607   23:37:39	-- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:04:08.607   23:37:39	-- setup/devices.sh@53 -- # local found=0
00:04:08.607   23:37:39	-- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]]
00:04:08.607   23:37:39	-- setup/devices.sh@56 -- # :
00:04:08.607   23:37:39	-- setup/devices.sh@59 -- # local pci status
00:04:08.607   23:37:39	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:08.607    23:37:39	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0
00:04:08.607    23:37:39	-- setup/devices.sh@47 -- # setup output config
00:04:08.607    23:37:39	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:08.607    23:37:39	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:08.866   23:37:39	-- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:08.866   23:37:39	-- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:04:08.866   23:37:39	-- setup/devices.sh@63 -- # found=1
00:04:08.866   23:37:39	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:08.866   23:37:39	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:08.866   23:37:39	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:09.124   23:37:39	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:09.124   23:37:39	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:10.501   23:37:41	-- setup/devices.sh@66 -- # (( found == 1 ))
00:04:10.501   23:37:41	-- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]]
00:04:10.501   23:37:41	-- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:10.501   23:37:41	-- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]]
00:04:10.501   23:37:41	-- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:04:10.501   23:37:41	-- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:10.501   23:37:41	-- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:04:10.501   23:37:41	-- setup/devices.sh@48 -- # local dev=0000:00:06.0
00:04:10.501   23:37:41	-- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:04:10.501   23:37:41	-- setup/devices.sh@50 -- # local mount_point=
00:04:10.501   23:37:41	-- setup/devices.sh@51 -- # local test_file=
00:04:10.501   23:37:41	-- setup/devices.sh@53 -- # local found=0
00:04:10.501   23:37:41	-- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:10.501   23:37:41	-- setup/devices.sh@59 -- # local pci status
00:04:10.501   23:37:41	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:10.501    23:37:41	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0
00:04:10.501    23:37:41	-- setup/devices.sh@47 -- # setup output config
00:04:10.501    23:37:41	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:10.501    23:37:41	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:10.760   23:37:41	-- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:10.760   23:37:41	-- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:04:10.760   23:37:41	-- setup/devices.sh@63 -- # found=1
00:04:10.760   23:37:41	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:10.760   23:37:41	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:10.760   23:37:41	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:10.760   23:37:41	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:04:10.760   23:37:41	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.663   23:37:42	-- setup/devices.sh@66 -- # (( found == 1 ))
00:04:12.663   23:37:42	-- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:12.663   23:37:42	-- setup/devices.sh@68 -- # return 0
00:04:12.663   23:37:42	-- setup/devices.sh@187 -- # cleanup_dm
00:04:12.663   23:37:42	-- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:12.663   23:37:42	-- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:12.663   23:37:42	-- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:04:12.663   23:37:42	-- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:12.663   23:37:42	-- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:04:12.663  /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:12.663   23:37:42	-- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:12.663   23:37:42	-- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:04:12.663  
00:04:12.663  real	0m6.912s
00:04:12.663  user	0m0.526s
00:04:12.663  sys	0m3.285s
00:04:12.663   23:37:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:12.663   23:37:42	-- common/autotest_common.sh@10 -- # set +x
00:04:12.663  ************************************
00:04:12.663  END TEST dm_mount
00:04:12.663  ************************************
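dm_mount's teardown (cleanup_dm, devices.sh@33-@43 above) mirrors the NVMe one with a dmsetup remove in between: unmount if mounted, force-remove the mapper device, then wipe both backing partitions. A hedged sketch:

    # Sketch of cleanup_dm as traced above.
    cleanup_dm() {
        mountpoint -q "$dm_mount" && umount "$dm_mount"
        # --force loads an error target before removal, so the node goes
        # away even if a late I/O still holds it open briefly.
        [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
        [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
        [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2
    }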
00:04:12.663   23:37:43	-- setup/devices.sh@1 -- # cleanup
00:04:12.663   23:37:43	-- setup/devices.sh@11 -- # cleanup_nvme
00:04:12.663   23:37:43	-- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:04:12.663   23:37:43	-- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:12.663   23:37:43	-- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:12.663   23:37:43	-- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:12.663   23:37:43	-- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:12.663  /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:04:12.663  /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:04:12.663  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:12.663  /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:12.663   23:37:43	-- setup/devices.sh@12 -- # cleanup_dm
00:04:12.663   23:37:43	-- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:04:12.663   23:37:43	-- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:12.663   23:37:43	-- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:12.663   23:37:43	-- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:12.663   23:37:43	-- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:04:12.663   23:37:43	-- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:12.663  
00:04:12.663  real	0m15.412s
00:04:12.663  user	0m1.770s
00:04:12.663  sys	0m8.528s
00:04:12.663   23:37:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:12.663   23:37:43	-- common/autotest_common.sh@10 -- # set +x
00:04:12.663  ************************************
00:04:12.663  END TEST devices
00:04:12.663  ************************************
00:04:12.663  
00:04:12.663  real	0m32.051s
00:04:12.663  user	0m6.998s
00:04:12.663  sys	0m19.864s
00:04:12.663   23:37:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:12.663   23:37:43	-- common/autotest_common.sh@10 -- # set +x
00:04:12.663  ************************************
00:04:12.663  END TEST setup.sh
00:04:12.663  ************************************
00:04:12.663   23:37:43	-- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:12.663  Hugepages
00:04:12.663  node     hugesize     free /  total
00:04:12.663  node0   1048576kB        0 /      0
00:04:12.663  node0      2048kB     2048 /   2048
00:04:12.663  
00:04:12.663  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:04:12.923  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:04:12.923  NVMe                      0000:00:06.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:04:12.923    23:37:43	-- spdk/autotest.sh@128 -- # uname -s
00:04:12.923   23:37:43	-- spdk/autotest.sh@128 -- # [[ Linux == Linux ]]
00:04:12.923   23:37:43	-- spdk/autotest.sh@130 -- # nvme_namespace_revert
00:04:12.923   23:37:43	-- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:13.490  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:13.490  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:14.866   23:37:45	-- common/autotest_common.sh@1527 -- # sleep 1
00:04:15.802   23:37:46	-- common/autotest_common.sh@1528 -- # bdfs=()
00:04:15.802   23:37:46	-- common/autotest_common.sh@1528 -- # local bdfs
00:04:15.802   23:37:46	-- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs))
00:04:15.802    23:37:46	-- common/autotest_common.sh@1529 -- # get_nvme_bdfs
00:04:15.802    23:37:46	-- common/autotest_common.sh@1508 -- # bdfs=()
00:04:15.802    23:37:46	-- common/autotest_common.sh@1508 -- # local bdfs
00:04:15.802    23:37:46	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:15.802     23:37:46	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:15.803     23:37:46	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:04:16.061    23:37:46	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:04:16.061    23:37:46	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0
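get_nvme_bdfs (autotest_common.sh@1508-@1514 above) builds the controller list by asking gen_nvme.sh for an SPDK bdev config and extracting every traddr with jq. A standalone sketch, assuming gen_nvme.sh emits the usual attach-controller JSON:

    # Sketch: collect NVMe PCI addresses the way the trace above does.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # here: 0000:00:06.0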
00:04:16.061   23:37:46	-- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:16.320  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:16.320  Waiting for block devices as requested
00:04:16.320  0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:04:16.578   23:37:47	-- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}"
00:04:16.578    23:37:47	-- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0
00:04:16.578     23:37:47	-- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0
00:04:16.578     23:37:47	-- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme
00:04:16.578    23:37:47	-- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0
00:04:16.578    23:37:47	-- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]]
00:04:16.578     23:37:47	-- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0
00:04:16.579    23:37:47	-- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0
00:04:16.579   23:37:47	-- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0
00:04:16.579   23:37:47	-- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]]
00:04:16.579    23:37:47	-- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:04:16.579    23:37:47	-- common/autotest_common.sh@1540 -- # grep oacs
00:04:16.579    23:37:47	-- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:16.579   23:37:47	-- common/autotest_common.sh@1540 -- # oacs=' 0x12a'
00:04:16.579   23:37:47	-- common/autotest_common.sh@1541 -- # oacs_ns_manage=8
00:04:16.579   23:37:47	-- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]]
00:04:16.579    23:37:47	-- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0
00:04:16.579    23:37:47	-- common/autotest_common.sh@1549 -- # grep unvmcap
00:04:16.579    23:37:47	-- common/autotest_common.sh@1549 -- # cut -d: -f2
00:04:16.579   23:37:47	-- common/autotest_common.sh@1549 -- # unvmcap=' 0'
00:04:16.579   23:37:47	-- common/autotest_common.sh@1550 -- # [[  0 -eq 0 ]]
00:04:16.579   23:37:47	-- common/autotest_common.sh@1552 -- # continue
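The probe above pulls the controller's OACS word out of nvme id-ctrl, masks bit 3 (0x8, namespace management: 0x12a & 0x8 = 8), and then checks unvmcap; with no unallocated capacity (0), the namespace revert is skipped via continue. A sketch of the same probe against nvme-cli's human-readable output:

    # Sketch of the OACS / unvmcap probe traced above.
    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)        # ' 0x12a'
    oacs_ns_manage=$(( oacs & 0x8 ))     # bit 3: namespace management
    unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)  # ' 0'
    if (( oacs_ns_manage != 0 )) && (( unvmcap != 0 )); then
        echo 'unallocated capacity present: revert namespaces here'
    fi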
00:04:16.579   23:37:47	-- spdk/autotest.sh@133 -- # timing_exit pre_cleanup
00:04:16.579   23:37:47	-- common/autotest_common.sh@728 -- # xtrace_disable
00:04:16.579   23:37:47	-- common/autotest_common.sh@10 -- # set +x
00:04:16.579   23:37:47	-- spdk/autotest.sh@136 -- # timing_enter afterboot
00:04:16.579   23:37:47	-- common/autotest_common.sh@722 -- # xtrace_disable
00:04:16.579   23:37:47	-- common/autotest_common.sh@10 -- # set +x
00:04:16.579   23:37:47	-- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:16.837  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:17.096  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:18.506   23:37:49	-- spdk/autotest.sh@138 -- # timing_exit afterboot
00:04:18.506   23:37:49	-- common/autotest_common.sh@728 -- # xtrace_disable
00:04:18.506   23:37:49	-- common/autotest_common.sh@10 -- # set +x
00:04:18.506   23:37:49	-- spdk/autotest.sh@142 -- # opal_revert_cleanup
00:04:18.506   23:37:49	-- common/autotest_common.sh@1586 -- # mapfile -t bdfs
00:04:18.506    23:37:49	-- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54
00:04:18.506    23:37:49	-- common/autotest_common.sh@1572 -- # bdfs=()
00:04:18.506    23:37:49	-- common/autotest_common.sh@1572 -- # local bdfs
00:04:18.506     23:37:49	-- common/autotest_common.sh@1574 -- # get_nvme_bdfs
00:04:18.506     23:37:49	-- common/autotest_common.sh@1508 -- # bdfs=()
00:04:18.506     23:37:49	-- common/autotest_common.sh@1508 -- # local bdfs
00:04:18.506     23:37:49	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:18.506      23:37:49	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:18.506      23:37:49	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:04:18.506     23:37:49	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:04:18.506     23:37:49	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0
00:04:18.506    23:37:49	-- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs)
00:04:18.506     23:37:49	-- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device
00:04:18.506    23:37:49	-- common/autotest_common.sh@1575 -- # device=0x0010
00:04:18.506    23:37:49	-- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:04:18.506    23:37:49	-- common/autotest_common.sh@1581 -- # printf '%s\n'
00:04:18.506   23:37:49	-- common/autotest_common.sh@1587 -- # [[ -z '' ]]
00:04:18.506   23:37:49	-- common/autotest_common.sh@1588 -- # return 0
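opal_revert_cleanup only acts on controllers whose PCI device ID matches 0x0a54; the emulated controller here reports 0x0010, so the filtered list is empty and the function returns immediately. A sketch of that sysfs filter (get_nvme_bdfs_by_id), assuming get_nvme_bdfs from earlier in the trace:

    # Sketch of the device-ID filter traced above.
    want=0x0a54
    matched=()
    for bdf in $(get_nvme_bdfs); do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0010
        [[ $device == "$want" ]] && matched+=("$bdf")
    done
    printf '%s\n' "${matched[@]}"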
00:04:18.506   23:37:49	-- spdk/autotest.sh@148 -- # '[' 1 -eq 1 ']'
00:04:18.506   23:37:49	-- spdk/autotest.sh@149 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:04:18.506   23:37:49	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:18.506   23:37:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:18.506   23:37:49	-- common/autotest_common.sh@10 -- # set +x
00:04:18.766  ************************************
00:04:18.766  START TEST unittest
00:04:18.766  ************************************
00:04:18.766   23:37:49	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:04:18.766  +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:04:18.766  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit
00:04:18.766  + testdir=/home/vagrant/spdk_repo/spdk/test/unit
00:04:18.766  +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:04:18.766  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../..
00:04:18.766  + rootdir=/home/vagrant/spdk_repo/spdk
00:04:18.767  + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:04:18.767  ++ rpc_py=rpc_cmd
00:04:18.767  ++ set -e
00:04:18.767  ++ shopt -s nullglob
00:04:18.767  ++ shopt -s extglob
00:04:18.767  ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:04:18.767  ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:04:18.767  +++ CONFIG_WPDK_DIR=
00:04:18.767  +++ CONFIG_ASAN=y
00:04:18.767  +++ CONFIG_VBDEV_COMPRESS=n
00:04:18.767  +++ CONFIG_HAVE_EXECINFO_H=y
00:04:18.767  +++ CONFIG_USDT=n
00:04:18.767  +++ CONFIG_CUSTOMOCF=n
00:04:18.767  +++ CONFIG_PREFIX=/usr/local
00:04:18.767  +++ CONFIG_RBD=n
00:04:18.767  +++ CONFIG_LIBDIR=
00:04:18.767  +++ CONFIG_IDXD=y
00:04:18.767  +++ CONFIG_NVME_CUSE=y
00:04:18.767  +++ CONFIG_SMA=n
00:04:18.767  +++ CONFIG_VTUNE=n
00:04:18.767  +++ CONFIG_TSAN=n
00:04:18.767  +++ CONFIG_RDMA_SEND_WITH_INVAL=y
00:04:18.767  +++ CONFIG_VFIO_USER_DIR=
00:04:18.767  +++ CONFIG_PGO_CAPTURE=n
00:04:18.767  +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:04:18.767  +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:18.767  +++ CONFIG_LTO=n
00:04:18.767  +++ CONFIG_ISCSI_INITIATOR=y
00:04:18.767  +++ CONFIG_CET=n
00:04:18.767  +++ CONFIG_VBDEV_COMPRESS_MLX5=n
00:04:18.767  +++ CONFIG_OCF_PATH=
00:04:18.767  +++ CONFIG_RDMA_SET_TOS=y
00:04:18.767  +++ CONFIG_HAVE_ARC4RANDOM=n
00:04:18.767  +++ CONFIG_HAVE_LIBARCHIVE=n
00:04:18.767  +++ CONFIG_UBLK=n
00:04:18.767  +++ CONFIG_ISAL_CRYPTO=y
00:04:18.767  +++ CONFIG_OPENSSL_PATH=
00:04:18.767  +++ CONFIG_OCF=n
00:04:18.767  +++ CONFIG_FUSE=n
00:04:18.767  +++ CONFIG_VTUNE_DIR=
00:04:18.767  +++ CONFIG_FUZZER_LIB=
00:04:18.767  +++ CONFIG_FUZZER=n
00:04:18.767  +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build
00:04:18.767  +++ CONFIG_CRYPTO=n
00:04:18.767  +++ CONFIG_PGO_USE=n
00:04:18.767  +++ CONFIG_VHOST=y
00:04:18.767  +++ CONFIG_DAOS=n
00:04:18.767  +++ CONFIG_DPDK_INC_DIR=
00:04:18.767  +++ CONFIG_DAOS_DIR=
00:04:18.767  +++ CONFIG_UNIT_TESTS=y
00:04:18.767  +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:04:18.767  +++ CONFIG_VIRTIO=y
00:04:18.767  +++ CONFIG_COVERAGE=y
00:04:18.767  +++ CONFIG_RDMA=y
00:04:18.767  +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:04:18.767  +++ CONFIG_URING_PATH=
00:04:18.767  +++ CONFIG_XNVME=n
00:04:18.767  +++ CONFIG_VFIO_USER=n
00:04:18.767  +++ CONFIG_ARCH=native
00:04:18.767  +++ CONFIG_URING_ZNS=n
00:04:18.767  +++ CONFIG_WERROR=y
00:04:18.767  +++ CONFIG_HAVE_LIBBSD=n
00:04:18.767  +++ CONFIG_UBSAN=y
00:04:18.767  +++ CONFIG_IPSEC_MB_DIR=
00:04:18.767  +++ CONFIG_GOLANG=n
00:04:18.767  +++ CONFIG_ISAL=y
00:04:18.767  +++ CONFIG_IDXD_KERNEL=n
00:04:18.767  +++ CONFIG_DPDK_LIB_DIR=
00:04:18.767  +++ CONFIG_RDMA_PROV=verbs
00:04:18.767  +++ CONFIG_APPS=y
00:04:18.767  +++ CONFIG_SHARED=n
00:04:18.767  +++ CONFIG_FC_PATH=
00:04:18.767  +++ CONFIG_DPDK_PKG_CONFIG=n
00:04:18.767  +++ CONFIG_FC=n
00:04:18.767  +++ CONFIG_AVAHI=n
00:04:18.767  +++ CONFIG_FIO_PLUGIN=y
00:04:18.767  +++ CONFIG_RAID5F=y
00:04:18.767  +++ CONFIG_EXAMPLES=y
00:04:18.767  +++ CONFIG_TESTS=y
00:04:18.767  +++ CONFIG_CRYPTO_MLX5=n
00:04:18.767  +++ CONFIG_MAX_LCORES=
00:04:18.767  +++ CONFIG_IPSEC_MB=n
00:04:18.767  +++ CONFIG_DEBUG=y
00:04:18.767  +++ CONFIG_DPDK_COMPRESSDEV=n
00:04:18.767  +++ CONFIG_CROSS_PREFIX=
00:04:18.767  +++ CONFIG_URING=n
00:04:18.767  ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:04:18.767  +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:04:18.767  ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:04:18.767  +++ _root=/home/vagrant/spdk_repo/spdk/test/common
00:04:18.767  +++ _root=/home/vagrant/spdk_repo/spdk
00:04:18.767  +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:04:18.767  +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:04:18.767  +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:04:18.767  +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:04:18.767  +++ ISCSI_APP=("$_app_dir/iscsi_tgt")
00:04:18.767  +++ NVMF_APP=("$_app_dir/nvmf_tgt")
00:04:18.767  +++ VHOST_APP=("$_app_dir/vhost")
00:04:18.767  +++ DD_APP=("$_app_dir/spdk_dd")
00:04:18.767  +++ SPDK_APP=("$_app_dir/spdk_tgt")
00:04:18.767  +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:04:18.767  +++ [[ #ifndef SPDK_CONFIG_H
00:04:18.767  #define SPDK_CONFIG_H
00:04:18.767  #define SPDK_CONFIG_APPS 1
00:04:18.767  #define SPDK_CONFIG_ARCH native
00:04:18.767  #define SPDK_CONFIG_ASAN 1
00:04:18.767  #undef SPDK_CONFIG_AVAHI
00:04:18.767  #undef SPDK_CONFIG_CET
00:04:18.767  #define SPDK_CONFIG_COVERAGE 1
00:04:18.767  #define SPDK_CONFIG_CROSS_PREFIX 
00:04:18.767  #undef SPDK_CONFIG_CRYPTO
00:04:18.767  #undef SPDK_CONFIG_CRYPTO_MLX5
00:04:18.767  #undef SPDK_CONFIG_CUSTOMOCF
00:04:18.767  #undef SPDK_CONFIG_DAOS
00:04:18.767  #define SPDK_CONFIG_DAOS_DIR 
00:04:18.767  #define SPDK_CONFIG_DEBUG 1
00:04:18.767  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:04:18.767  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:18.767  #define SPDK_CONFIG_DPDK_INC_DIR 
00:04:18.767  #define SPDK_CONFIG_DPDK_LIB_DIR 
00:04:18.767  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:04:18.767  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:18.767  #define SPDK_CONFIG_EXAMPLES 1
00:04:18.767  #undef SPDK_CONFIG_FC
00:04:18.767  #define SPDK_CONFIG_FC_PATH 
00:04:18.767  #define SPDK_CONFIG_FIO_PLUGIN 1
00:04:18.767  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:04:18.767  #undef SPDK_CONFIG_FUSE
00:04:18.767  #undef SPDK_CONFIG_FUZZER
00:04:18.767  #define SPDK_CONFIG_FUZZER_LIB 
00:04:18.767  #undef SPDK_CONFIG_GOLANG
00:04:18.767  #undef SPDK_CONFIG_HAVE_ARC4RANDOM
00:04:18.767  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:04:18.767  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:04:18.767  #undef SPDK_CONFIG_HAVE_LIBBSD
00:04:18.767  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:04:18.767  #define SPDK_CONFIG_IDXD 1
00:04:18.767  #undef SPDK_CONFIG_IDXD_KERNEL
00:04:18.767  #undef SPDK_CONFIG_IPSEC_MB
00:04:18.767  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:04:18.767  #define SPDK_CONFIG_ISAL 1
00:04:18.767  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:04:18.767  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:04:18.767  #define SPDK_CONFIG_LIBDIR 
00:04:18.767  #undef SPDK_CONFIG_LTO
00:04:18.767  #define SPDK_CONFIG_MAX_LCORES 
00:04:18.767  #define SPDK_CONFIG_NVME_CUSE 1
00:04:18.767  #undef SPDK_CONFIG_OCF
00:04:18.767  #define SPDK_CONFIG_OCF_PATH 
00:04:18.767  #define SPDK_CONFIG_OPENSSL_PATH 
00:04:18.767  #undef SPDK_CONFIG_PGO_CAPTURE
00:04:18.767  #undef SPDK_CONFIG_PGO_USE
00:04:18.767  #define SPDK_CONFIG_PREFIX /usr/local
00:04:18.767  #define SPDK_CONFIG_RAID5F 1
00:04:18.767  #undef SPDK_CONFIG_RBD
00:04:18.767  #define SPDK_CONFIG_RDMA 1
00:04:18.767  #define SPDK_CONFIG_RDMA_PROV verbs
00:04:18.767  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:04:18.767  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:04:18.767  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:04:18.767  #undef SPDK_CONFIG_SHARED
00:04:18.767  #undef SPDK_CONFIG_SMA
00:04:18.767  #define SPDK_CONFIG_TESTS 1
00:04:18.767  #undef SPDK_CONFIG_TSAN
00:04:18.767  #undef SPDK_CONFIG_UBLK
00:04:18.767  #define SPDK_CONFIG_UBSAN 1
00:04:18.767  #define SPDK_CONFIG_UNIT_TESTS 1
00:04:18.767  #undef SPDK_CONFIG_URING
00:04:18.767  #define SPDK_CONFIG_URING_PATH 
00:04:18.767  #undef SPDK_CONFIG_URING_ZNS
00:04:18.767  #undef SPDK_CONFIG_USDT
00:04:18.767  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:04:18.767  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:04:18.767  #undef SPDK_CONFIG_VFIO_USER
00:04:18.767  #define SPDK_CONFIG_VFIO_USER_DIR 
00:04:18.767  #define SPDK_CONFIG_VHOST 1
00:04:18.767  #define SPDK_CONFIG_VIRTIO 1
00:04:18.767  #undef SPDK_CONFIG_VTUNE
00:04:18.767  #define SPDK_CONFIG_VTUNE_DIR 
00:04:18.767  #define SPDK_CONFIG_WERROR 1
00:04:18.767  #define SPDK_CONFIG_WPDK_DIR 
00:04:18.767  #undef SPDK_CONFIG_XNVME
00:04:18.767  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:04:18.767  +++ (( SPDK_AUTOTEST_DEBUG_APPS ))
00:04:18.767  ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:18.767  +++ [[ -e /bin/wpdk_common.sh ]]
00:04:18.767  +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:18.767  +++ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:18.767  ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:18.767  ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:18.767  ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:18.767  ++++ export PATH
00:04:18.767  ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:18.767  ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:04:18.767  +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:04:18.767  ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:04:18.767  +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:04:18.767  ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:04:18.767  +++ _pmrootdir=/home/vagrant/spdk_repo/spdk
00:04:18.767  +++ TEST_TAG=N/A
00:04:18.767  +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:04:18.767  ++ : 1
00:04:18.767  ++ export RUN_NIGHTLY
00:04:18.767  ++ : 0
00:04:18.767  ++ export SPDK_AUTOTEST_DEBUG_APPS
00:04:18.767  ++ : 0
00:04:18.767  ++ export SPDK_RUN_VALGRIND
00:04:18.767  ++ : 1
00:04:18.767  ++ export SPDK_RUN_FUNCTIONAL_TEST
00:04:18.767  ++ : 1
00:04:18.767  ++ export SPDK_TEST_UNITTEST
00:04:18.767  ++ :
00:04:18.767  ++ export SPDK_TEST_AUTOBUILD
00:04:18.767  ++ : 0
00:04:18.767  ++ export SPDK_TEST_RELEASE_BUILD
00:04:18.767  ++ : 0
00:04:18.767  ++ export SPDK_TEST_ISAL
00:04:18.767  ++ : 0
00:04:18.767  ++ export SPDK_TEST_ISCSI
00:04:18.767  ++ : 0
00:04:18.767  ++ export SPDK_TEST_ISCSI_INITIATOR
00:04:18.767  ++ : 1
00:04:18.767  ++ export SPDK_TEST_NVME
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_NVME_PMR
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_NVME_BP
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_NVME_CLI
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_NVME_CUSE
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_NVME_FDP
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_NVMF
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_VFIOUSER
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_VFIOUSER_QEMU
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_FUZZER
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_FUZZER_SHORT
00:04:18.768  ++ : rdma
00:04:18.768  ++ export SPDK_TEST_NVMF_TRANSPORT
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_RBD
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_VHOST
00:04:18.768  ++ : 1
00:04:18.768  ++ export SPDK_TEST_BLOCKDEV
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_IOAT
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_BLOBFS
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_VHOST_INIT
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_LVOL
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_VBDEV_COMPRESS
00:04:18.768  ++ : 1
00:04:18.768  ++ export SPDK_RUN_ASAN
00:04:18.768  ++ : 1
00:04:18.768  ++ export SPDK_RUN_UBSAN
00:04:18.768  ++ :
00:04:18.768  ++ export SPDK_RUN_EXTERNAL_DPDK
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_RUN_NON_ROOT
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_CRYPTO
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_FTL
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_OCF
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_VMD
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_OPAL
00:04:18.768  ++ :
00:04:18.768  ++ export SPDK_TEST_NATIVE_DPDK
00:04:18.768  ++ : true
00:04:18.768  ++ export SPDK_AUTOTEST_X
00:04:18.768  ++ : 1
00:04:18.768  ++ export SPDK_TEST_RAID5
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_URING
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_USDT
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_USE_IGB_UIO
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_SCHEDULER
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_SCANBUILD
00:04:18.768  ++ :
00:04:18.768  ++ export SPDK_TEST_NVMF_NICS
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_SMA
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_DAOS
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_XNVME
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_ACCEL_DSA
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_ACCEL_IAA
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_ACCEL_IOAT
00:04:18.768  ++ :
00:04:18.768  ++ export SPDK_TEST_FUZZER_TARGET
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_TEST_NVMF_MDNS
00:04:18.768  ++ : 0
00:04:18.768  ++ export SPDK_JSONRPC_GO_CLIENT
00:04:18.768  ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:04:18.768  ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:04:18.768  ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:04:18.768  ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:04:18.768  ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:04:18.768  ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:04:18.768  ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:04:18.768  ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:04:18.768  ++ export PCI_BLOCK_SYNC_ON_RESET=yes
00:04:18.768  ++ PCI_BLOCK_SYNC_ON_RESET=yes
00:04:18.768  ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:04:18.768  ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:04:18.768  ++ export PYTHONDONTWRITEBYTECODE=1
00:04:18.768  ++ PYTHONDONTWRITEBYTECODE=1
00:04:18.768  ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:04:18.768  ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:04:18.768  ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:04:18.768  ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:04:18.768  ++ asan_suppression_file=/var/tmp/asan_suppression_file
00:04:18.768  ++ rm -rf /var/tmp/asan_suppression_file
00:04:18.768  ++ cat
00:04:18.768  ++ echo leak:libfuse3.so
00:04:18.768  ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:04:18.768  ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:04:18.768  ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:04:18.768  ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:04:18.768  ++ '[' -z /var/spdk/dependencies ']'
00:04:18.768  ++ export DEPENDENCY_DIR
00:04:18.768  ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:04:18.768  ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:04:18.768  ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:04:18.768  ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:04:18.768  ++ export QEMU_BIN=
00:04:18.768  ++ QEMU_BIN=
00:04:18.768  ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:04:18.768  ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:04:18.768  ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:04:18.768  ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:04:18.768  ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:18.768  ++ UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:18.768  ++ _LCOV_MAIN=0
00:04:18.768  ++ _LCOV_LLVM=1
00:04:18.768  ++ _LCOV=
00:04:18.768  ++ [[ '' == *clang* ]]
00:04:18.768  ++ [[ 0 -eq 1 ]]
00:04:18.768  ++ _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:04:18.768  ++ _lcov_opt[_LCOV_MAIN]=
00:04:18.768  ++ lcov_opt=
00:04:18.768  ++ '[' 0 -eq 0 ']'
00:04:18.768  ++ export valgrind=
00:04:18.768  ++ valgrind=
00:04:18.768  +++ uname -s
00:04:18.768  ++ '[' Linux = Linux ']'
00:04:18.768  ++ HUGEMEM=4096
00:04:18.768  ++ export CLEAR_HUGE=yes
00:04:18.768  ++ CLEAR_HUGE=yes
00:04:18.768  ++ [[ 0 -eq 1 ]]
00:04:18.768  ++ [[ 0 -eq 1 ]]
00:04:18.768  ++ MAKE=make
00:04:18.768  +++ nproc
00:04:18.768  ++ MAKEFLAGS=-j10
00:04:18.768  ++ export HUGEMEM=4096
00:04:18.768  ++ HUGEMEM=4096
00:04:18.768  ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:04:18.768  ++ NO_HUGE=()
00:04:18.768  ++ TEST_MODE=
00:04:18.768  ++ [[ -z '' ]]
00:04:18.768  ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
00:04:18.768  ++ exec
00:04:18.768  ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
00:04:18.768  ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server
00:04:18.768  ++ set_test_storage 2147483648
00:04:18.768  ++ [[ -v testdir ]]
00:04:18.768  ++ local requested_size=2147483648
00:04:18.768  ++ local mount target_dir
00:04:18.768  ++ local -A mounts fss sizes avails uses
00:04:18.768  ++ local source fs size avail mount use
00:04:18.768  ++ local storage_fallback storage_candidates
00:04:18.768  +++ mktemp -udt spdk.XXXXXX
00:04:18.768  ++ storage_fallback=/tmp/spdk.kxO6w0
00:04:18.768  ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:04:18.768  ++ [[ -n '' ]]
00:04:18.768  ++ [[ -n '' ]]
00:04:18.768  ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.kxO6w0/tests/unit /tmp/spdk.kxO6w0
00:04:18.768  ++ requested_size=2214592512
00:04:18.768  ++ read -r source fs size use avail _ mount
00:04:18.768  +++ df -T
00:04:18.768  +++ grep -v Filesystem
00:04:18.768  ++ mounts["$mount"]=tmpfs
00:04:18.768  ++ fss["$mount"]=tmpfs
00:04:18.768  ++ avails["$mount"]=1252601856
00:04:18.768  ++ sizes["$mount"]=1253683200
00:04:18.768  ++ uses["$mount"]=1081344
00:04:18.768  ++ read -r source fs size use avail _ mount
00:04:18.768  ++ mounts["$mount"]=/dev/vda1
00:04:18.768  ++ fss["$mount"]=ext4
00:04:18.768  ++ avails["$mount"]=10461020160
00:04:18.768  ++ sizes["$mount"]=20616794112
00:04:18.768  ++ uses["$mount"]=10138996736
00:04:18.768  ++ read -r source fs size use avail _ mount
00:04:18.768  ++ mounts["$mount"]=tmpfs
00:04:18.768  ++ fss["$mount"]=tmpfs
00:04:18.768  ++ avails["$mount"]=6268403712
00:04:18.768  ++ sizes["$mount"]=6268403712
00:04:18.768  ++ uses["$mount"]=0
00:04:18.768  ++ read -r source fs size use avail _ mount
00:04:18.768  ++ mounts["$mount"]=tmpfs
00:04:18.768  ++ fss["$mount"]=tmpfs
00:04:18.768  ++ avails["$mount"]=5242880
00:04:18.768  ++ sizes["$mount"]=5242880
00:04:18.768  ++ uses["$mount"]=0
00:04:18.768  ++ read -r source fs size use avail _ mount
00:04:18.768  ++ mounts["$mount"]=/dev/vda15
00:04:18.768  ++ fss["$mount"]=vfat
00:04:18.768  ++ avails["$mount"]=103061504
00:04:18.768  ++ sizes["$mount"]=109395968
00:04:18.768  ++ uses["$mount"]=6334464
00:04:18.768  ++ read -r source fs size use avail _ mount
00:04:18.768  ++ mounts["$mount"]=tmpfs
00:04:18.768  ++ fss["$mount"]=tmpfs
00:04:18.768  ++ avails["$mount"]=1253675008
00:04:18.768  ++ sizes["$mount"]=1253679104
00:04:18.768  ++ uses["$mount"]=4096
00:04:18.768  ++ read -r source fs size use avail _ mount
00:04:18.768  ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output
00:04:18.768  ++ fss["$mount"]=fuse.sshfs
00:04:18.768  ++ avails["$mount"]=97958748160
00:04:18.768  ++ sizes["$mount"]=105088212992
00:04:18.768  ++ uses["$mount"]=1744031744
00:04:18.768  ++ read -r source fs size use avail _ mount
00:04:18.768  ++ printf '* Looking for test storage...\n'
00:04:18.768  * Looking for test storage...
00:04:18.768  ++ local target_space new_size
00:04:18.768  ++ for target_dir in "${storage_candidates[@]}"
00:04:18.768  +++ df /home/vagrant/spdk_repo/spdk/test/unit
00:04:18.768  +++ awk '$1 !~ /Filesystem/{print $6}'
00:04:18.768  ++ mount=/
00:04:18.768  ++ target_space=10461020160
00:04:18.768  ++ (( target_space == 0 || target_space < requested_size ))
00:04:18.768  ++ (( target_space >= requested_size ))
00:04:18.768  ++ [[ ext4 == tmpfs ]]
00:04:18.769  ++ [[ ext4 == ramfs ]]
00:04:18.769  ++ [[ / == / ]]
00:04:18.769  ++ new_size=12353589248
00:04:18.769  ++ (( new_size * 100 / sizes[/] > 95 ))
00:04:18.769  ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit
00:04:18.769  ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit
00:04:18.769  ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit
00:04:18.769  * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit
00:04:18.769  ++ return 0
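set_test_storage (the ++ block above) snapshots df -T into associative arrays keyed by mount point, then walks its candidate directories (the test dir, a /tmp fallback tree, the fallback root) and exports the first one whose filesystem can hold the request: 2 GiB plus 64 MiB of slack (2214592512 bytes). A hedged sketch of the selection step, assuming the avails array was filled as shown in the trace:

    # Sketch of the storage-selection loop traced above.
    requested_size=$(( 2147483648 + 64 * 1024 * 1024 ))   # 2 GiB + slack
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        (( target_space >= requested_size )) && break
        # tmpfs/ramfs candidates could be grown instead of skipped (elided).
    done
    export SPDK_TEST_STORAGE=$target_dir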
00:04:18.769  ++ set -o errtrace
00:04:18.769  ++ shopt -s extdebug
00:04:18.769  ++ trap 'trap - ERR; print_backtrace >&2' ERR
00:04:18.769  ++ PS4=' \t	-- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:04:18.769    23:37:49	-- common/autotest_common.sh@1682 -- # true
00:04:18.769    23:37:49	-- common/autotest_common.sh@1684 -- # xtrace_fd
00:04:18.769    23:37:49	-- common/autotest_common.sh@25 -- # [[ -n '' ]]
00:04:18.769    23:37:49	-- common/autotest_common.sh@29 -- # exec
00:04:18.769    23:37:49	-- common/autotest_common.sh@31 -- # xtrace_restore
00:04:18.769    23:37:49	-- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:04:18.769    23:37:49	-- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:04:18.769    23:37:49	-- common/autotest_common.sh@18 -- # set -x
00:04:18.769    23:37:49	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:18.769     23:37:49	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:18.769     23:37:49	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:18.769    23:37:49	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:18.769    23:37:49	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:18.769    23:37:49	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:18.769    23:37:49	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:18.769    23:37:49	-- scripts/common.sh@335 -- # IFS=.-:
00:04:18.769    23:37:49	-- scripts/common.sh@335 -- # read -ra ver1
00:04:18.769    23:37:49	-- scripts/common.sh@336 -- # IFS=.-:
00:04:18.769    23:37:49	-- scripts/common.sh@336 -- # read -ra ver2
00:04:18.769    23:37:49	-- scripts/common.sh@337 -- # local 'op=<'
00:04:18.769    23:37:49	-- scripts/common.sh@339 -- # ver1_l=2
00:04:18.769    23:37:49	-- scripts/common.sh@340 -- # ver2_l=1
00:04:18.769    23:37:49	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:18.769    23:37:49	-- scripts/common.sh@343 -- # case "$op" in
00:04:18.769    23:37:49	-- scripts/common.sh@344 -- # : 1
00:04:18.769    23:37:49	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:18.769    23:37:49	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:18.769     23:37:49	-- scripts/common.sh@364 -- # decimal 1
00:04:18.769     23:37:49	-- scripts/common.sh@352 -- # local d=1
00:04:18.769     23:37:49	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:18.769     23:37:49	-- scripts/common.sh@354 -- # echo 1
00:04:18.769    23:37:49	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:18.769     23:37:49	-- scripts/common.sh@365 -- # decimal 2
00:04:18.769     23:37:49	-- scripts/common.sh@352 -- # local d=2
00:04:18.769     23:37:49	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:18.769     23:37:49	-- scripts/common.sh@354 -- # echo 2
00:04:18.769    23:37:49	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:18.769    23:37:49	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:18.769    23:37:49	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:18.769    23:37:49	-- scripts/common.sh@367 -- # return 0
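The lcov gate above is a component-wise version compare: both strings are split on ".", "-", and ":", then compared field by field, so lt 1.15 2 succeeds (1 < 2 in the first field) and the newer --rc option spelling is chosen. A minimal sketch of that comparison, reconstructed from the scripts/common.sh trace:

    # Sketch of the component-wise compare traced above (lt / cmp_versions).
    lt() {   # exit 0 iff version $1 < version $2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov older than 2'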
00:04:18.769    23:37:49	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:18.769    23:37:49	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:18.769  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.769  		--rc genhtml_branch_coverage=1
00:04:18.769  		--rc genhtml_function_coverage=1
00:04:18.769  		--rc genhtml_legend=1
00:04:18.769  		--rc geninfo_all_blocks=1
00:04:18.769  		--rc geninfo_unexecuted_blocks=1
00:04:18.769  		
00:04:18.769  		'
00:04:18.769    23:37:49	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:18.769  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.769  		--rc genhtml_branch_coverage=1
00:04:18.769  		--rc genhtml_function_coverage=1
00:04:18.769  		--rc genhtml_legend=1
00:04:18.769  		--rc geninfo_all_blocks=1
00:04:18.769  		--rc geninfo_unexecuted_blocks=1
00:04:18.769  		
00:04:18.769  		'
00:04:18.769    23:37:49	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:18.769  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.769  		--rc genhtml_branch_coverage=1
00:04:18.769  		--rc genhtml_function_coverage=1
00:04:18.769  		--rc genhtml_legend=1
00:04:18.769  		--rc geninfo_all_blocks=1
00:04:18.769  		--rc geninfo_unexecuted_blocks=1
00:04:18.769  		
00:04:18.769  		'
00:04:18.769    23:37:49	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:18.769  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.769  		--rc genhtml_branch_coverage=1
00:04:18.769  		--rc genhtml_function_coverage=1
00:04:18.769  		--rc genhtml_legend=1
00:04:18.769  		--rc geninfo_all_blocks=1
00:04:18.769  		--rc geninfo_unexecuted_blocks=1
00:04:18.769  		
00:04:18.769  		'
00:04:18.769   23:37:49	-- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk
00:04:18.769   23:37:49	-- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']'
00:04:18.769   23:37:49	-- unit/unittest.sh@158 -- # '[' -z x ']'
00:04:18.769   23:37:49	-- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']'
00:04:18.769   23:37:49	-- unit/unittest.sh@174 -- # [[ y == y ]]
00:04:18.769   23:37:49	-- unit/unittest.sh@175 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:04:18.769   23:37:49	-- unit/unittest.sh@176 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:04:18.769   23:37:49	-- unit/unittest.sh@178 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info
00:04:33.649  /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found
00:04:33.649  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno
00:04:33.649  /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found
00:04:33.649  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno
00:04:33.649  /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found
00:04:33.649  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno
00:05:00.251    23:38:29	-- unit/unittest.sh@182 -- # uname -m
00:05:00.251   23:38:29	-- unit/unittest.sh@182 -- # '[' x86_64 = aarch64 ']'
00:05:00.251   23:38:29	-- unit/unittest.sh@186 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:05:00.251   23:38:29	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:00.251   23:38:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:00.251   23:38:29	-- common/autotest_common.sh@10 -- # set +x
00:05:00.251  ************************************
00:05:00.251  START TEST unittest_pci_event
00:05:00.251  ************************************
00:05:00.251   23:38:29	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:05:00.251  
00:05:00.251  
00:05:00.251       CUnit - A unit testing framework for C - Version 2.1-3
00:05:00.251       http://cunit.sourceforge.net/
00:05:00.251  
00:05:00.251  
00:05:00.251  Suite: pci_event
00:05:00.251    Test: test_pci_parse_event ...[2024-12-13 23:38:29.084798] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000
00:05:00.251  [2024-12-13 23:38:29.085679] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000
00:05:00.251  passed
00:05:00.251  
00:05:00.251  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:00.251                suites      1      1    n/a      0        0
00:05:00.251                 tests      1      1      1      0        0
00:05:00.251               asserts     15     15     15      0      n/a
00:05:00.251  
00:05:00.251  Elapsed time =    0.001 seconds
00:05:00.251  
00:05:00.251  real	0m0.036s
00:05:00.251  user	0m0.024s
00:05:00.251  sys	0m0.009s
00:05:00.251   23:38:29	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:00.251   23:38:29	-- common/autotest_common.sh@10 -- # set +x
00:05:00.251  ************************************
00:05:00.251  END TEST unittest_pci_event
00:05:00.251  ************************************
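Each suite is driven through run_test, which this log shows only from the outside: the asterisk banners, the START/END TEST markers, and the real/user/sys timing that brackets every binary. The helper itself lives in common/autotest_common.sh and its body is not visible here, so the following is a hypothetical reconstruction of such a wrapper, not the actual SPDK code:

    run_test() {
            # print banners like the ones in this log, then run and time the binary
            local name=$1; shift
            echo "************************************"
            echo "START TEST $name"
            echo "************************************"
            time "$@"
            local rc=$?
            echo "************************************"
            echo "END TEST $name"
            echo "************************************"
            return $rc
    }

Invoked as run_test unittest_pci_event /path/to/pci_event_ut, this would reproduce the shape of the blocks recorded above and below.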
00:05:00.251   23:38:29	-- unit/unittest.sh@187 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut
00:05:00.251   23:38:29	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:00.251   23:38:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:00.251   23:38:29	-- common/autotest_common.sh@10 -- # set +x
00:05:00.251  ************************************
00:05:00.251  START TEST unittest_include
00:05:00.251  ************************************
00:05:00.251   23:38:29	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut
00:05:00.251  
00:05:00.251  
00:05:00.251       CUnit - A unit testing framework for C - Version 2.1-3
00:05:00.251       http://cunit.sourceforge.net/
00:05:00.251  
00:05:00.251  
00:05:00.251  Suite: histogram
00:05:00.251    Test: histogram_test ...passed
00:05:00.251    Test: histogram_merge ...passed
00:05:00.251  
00:05:00.251  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:00.251                suites      1      1    n/a      0        0
00:05:00.251                 tests      2      2      2      0        0
00:05:00.251               asserts     50     50     50      0      n/a
00:05:00.251  
00:05:00.251  Elapsed time =    0.006 seconds
00:05:00.251  
00:05:00.251  real	0m0.037s
00:05:00.251  user	0m0.023s
00:05:00.251  sys	0m0.013s
00:05:00.251   23:38:29	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:00.251   23:38:29	-- common/autotest_common.sh@10 -- # set +x
00:05:00.251  ************************************
00:05:00.251  END TEST unittest_include
00:05:00.251  ************************************
00:05:00.251   23:38:29	-- unit/unittest.sh@188 -- # run_test unittest_bdev unittest_bdev
00:05:00.251   23:38:29	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:00.251   23:38:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:00.251   23:38:29	-- common/autotest_common.sh@10 -- # set +x
00:05:00.251  ************************************
00:05:00.251  START TEST unittest_bdev
00:05:00.251  ************************************
00:05:00.251   23:38:29	-- common/autotest_common.sh@1114 -- # unittest_bdev
00:05:00.251   23:38:29	-- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut
00:05:00.251  
00:05:00.251  
00:05:00.251       CUnit - A unit testing framework for C - Version 2.1-3
00:05:00.251       http://cunit.sourceforge.net/
00:05:00.251  
00:05:00.251  
00:05:00.251  Suite: bdev
00:05:00.251    Test: bytes_to_blocks_test ...passed
00:05:00.251    Test: num_blocks_test ...passed
00:05:00.251    Test: io_valid_test ...passed
00:05:00.251    Test: open_write_test ...[2024-12-13 23:38:29.345477] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut
00:05:00.251  [2024-12-13 23:38:29.345963] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut
00:05:00.251  [2024-12-13 23:38:29.346309] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut
00:05:00.251  passed
00:05:00.251    Test: claim_test ...passed
00:05:00.251    Test: alias_add_del_test ...[2024-12-13 23:38:29.439821] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists
00:05:00.251  [2024-12-13 23:38:29.440117] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed
00:05:00.251  [2024-12-13 23:38:29.440315] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists
00:05:00.251  passed
00:05:00.251    Test: get_device_stat_test ...passed
00:05:00.251    Test: bdev_io_types_test ...passed
00:05:00.251    Test: bdev_io_wait_test ...passed
00:05:00.251    Test: bdev_io_spans_split_test ...passed
00:05:00.251    Test: bdev_io_boundary_split_test ...passed
00:05:00.251    Test: bdev_io_max_size_and_segment_split_test ...[2024-12-13 23:38:29.611153] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size
00:05:00.251  passed
00:05:00.251    Test: bdev_io_mix_split_test ...passed
00:05:00.251    Test: bdev_io_split_with_io_wait ...passed
00:05:00.251    Test: bdev_io_write_unit_split_test ...[2024-12-13 23:38:29.702122] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32
00:05:00.251  [2024-12-13 23:38:29.702384] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32
00:05:00.251  [2024-12-13 23:38:29.702471] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32
00:05:00.252  [2024-12-13 23:38:29.702632] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64
00:05:00.252  passed
00:05:00.252    Test: bdev_io_alignment_with_boundary ...passed
00:05:00.252    Test: bdev_io_alignment ...passed
00:05:00.252    Test: bdev_histograms ...passed
00:05:00.252    Test: bdev_write_zeroes ...passed
00:05:00.252    Test: bdev_compare_and_write ...passed
00:05:00.252    Test: bdev_compare ...passed
00:05:00.252    Test: bdev_compare_emulated ...passed
00:05:00.252    Test: bdev_zcopy_write ...passed
00:05:00.252    Test: bdev_zcopy_read ...passed
00:05:00.252    Test: bdev_open_while_hotremove ...passed
00:05:00.252    Test: bdev_close_while_hotremove ...passed
00:05:00.252    Test: bdev_open_ext_test ...[2024-12-13 23:38:30.045016] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function
00:05:00.252  passed
00:05:00.252    Test: bdev_open_ext_unregister ...[2024-12-13 23:38:30.045499] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function
00:05:00.252  passed
00:05:00.252    Test: bdev_set_io_timeout ...passed
00:05:00.252    Test: bdev_set_qd_sampling ...passed
00:05:00.252    Test: lba_range_overlap ...passed
00:05:00.252    Test: lock_lba_range_check_ranges ...passed
00:05:00.252    Test: lock_lba_range_with_io_outstanding ...passed
00:05:00.252    Test: lock_lba_range_overlapped ...passed
00:05:00.252    Test: bdev_quiesce ...[2024-12-13 23:38:30.207004] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found.
00:05:00.252  passed
00:05:00.252    Test: bdev_io_abort ...passed
00:05:00.252    Test: bdev_unmap ...passed
00:05:00.252    Test: bdev_write_zeroes_split_test ...passed
00:05:00.252    Test: bdev_set_options_test ...passed
00:05:00.252  [2024-12-13 23:38:30.309783] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value
00:05:00.252    Test: bdev_get_memory_domains ...passed
00:05:00.252    Test: bdev_io_ext ...passed
00:05:00.252    Test: bdev_io_ext_no_opts ...passed
00:05:00.252    Test: bdev_io_ext_invalid_opts ...passed
00:05:00.252    Test: bdev_io_ext_split ...passed
00:05:00.252    Test: bdev_io_ext_bounce_buffer ...passed
00:05:00.252    Test: bdev_register_uuid_alias ...[2024-12-13 23:38:30.479086] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 1d596abc-4d3a-4137-a25f-9bed9a62bb0a already exists
00:05:00.252  [2024-12-13 23:38:30.479328] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:1d596abc-4d3a-4137-a25f-9bed9a62bb0a alias for bdev bdev0
00:05:00.252  passed
00:05:00.252    Test: bdev_unregister_by_name ...[2024-12-13 23:38:30.495211] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1
00:05:00.252  [2024-12-13 23:38:30.495419] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module.
00:05:00.252  passed
00:05:00.252    Test: for_each_bdev_test ...passed
00:05:00.252    Test: bdev_seek_test ...passed
00:05:00.252    Test: bdev_copy ...passed
00:05:00.252    Test: bdev_copy_split_test ...passed
00:05:00.252    Test: examine_locks ...passed
00:05:00.252    Test: claim_v2_rwo ...[2024-12-13 23:38:30.587131] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.587355] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.587478] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.587633] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.587744] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.587883] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims
00:05:00.252  passed
00:05:00.252    Test: claim_v2_rom ...[2024-12-13 23:38:30.588310] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.588488] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.588615] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.588777] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.588954] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims
00:05:00.252  [2024-12-13 23:38:30.589121] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:05:00.252  passed
00:05:00.252    Test: claim_v2_rwm ...[2024-12-13 23:38:30.589538] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims
00:05:00.252  [2024-12-13 23:38:30.589741] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.589889] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.590077] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.590135] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.590256] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut
00:05:00.252  passed
00:05:00.252  [2024-12-13 23:38:30.590369] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims
00:05:00.252    Test: claim_v2_existing_writer ...[2024-12-13 23:38:30.590760] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:05:00.252  [2024-12-13 23:38:30.590945] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:05:00.252  passed
00:05:00.252    Test: claim_v2_existing_v1 ...[2024-12-13 23:38:30.591303] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.591437] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.591494] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:05:00.252  passed
00:05:00.252    Test: claim_v1_existing_v2 ...[2024-12-13 23:38:30.591894] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.592047] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:05:00.252  [2024-12-13 23:38:30.592197] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:05:00.252  passed
00:05:00.252    Test: examine_claimed ...[2024-12-13 23:38:30.592594] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1
00:05:00.252  passed
00:05:00.252  
00:05:00.252  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:00.252                suites      1      1    n/a      0        0
00:05:00.252                 tests     59     59     59      0        0
00:05:00.252               asserts   4599   4599   4599      0      n/a
00:05:00.252  
00:05:00.252  Elapsed time =    1.313 seconds
00:05:00.252   23:38:30	-- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut
00:05:00.252  
00:05:00.252  
00:05:00.252       CUnit - A unit testing framework for C - Version 2.1-3
00:05:00.252       http://cunit.sourceforge.net/
00:05:00.252  
00:05:00.252  
00:05:00.252  Suite: nvme
00:05:00.252    Test: test_create_ctrlr ...passed
00:05:00.252    Test: test_reset_ctrlr ...[2024-12-13 23:38:30.635830] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.252  passed
00:05:00.252    Test: test_race_between_reset_and_destruct_ctrlr ...passed
00:05:00.252    Test: test_failover_ctrlr ...passed
00:05:00.252    Test: test_race_between_failover_and_add_secondary_trid ...[2024-12-13 23:38:30.639231] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.252  [2024-12-13 23:38:30.639602] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.252  [2024-12-13 23:38:30.639969] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.252  passed
00:05:00.252    Test: test_pending_reset ...[2024-12-13 23:38:30.641879] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.252  [2024-12-13 23:38:30.642383] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.252  passed
00:05:00.252    Test: test_attach_ctrlr ...[2024-12-13 23:38:30.643994] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:05:00.252  passed
00:05:00.252    Test: test_aer_cb ...passed
00:05:00.252    Test: test_submit_nvme_cmd ...passed
00:05:00.252    Test: test_add_remove_trid ...passed
00:05:00.252    Test: test_abort ...[2024-12-13 23:38:30.648472] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure.
00:05:00.252  passed
00:05:00.252    Test: test_get_io_qpair ...passed
00:05:00.252    Test: test_bdev_unregister ...passed
00:05:00.252    Test: test_compare_ns ...passed
00:05:00.252    Test: test_init_ana_log_page ...passed
00:05:00.252    Test: test_get_memory_domains ...passed
00:05:00.252    Test: test_reconnect_qpair ...[2024-12-13 23:38:30.652571] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.252  passed
00:05:00.252    Test: test_create_bdev_ctrlr ...[2024-12-13 23:38:30.653480] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 is duplicated.
00:05:00.252  passed
00:05:00.252    Test: test_add_multi_ns_to_bdev ...[2024-12-13 23:38:30.655209] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical.
00:05:00.252  passed
00:05:00.252    Test: test_add_multi_io_paths_to_nbdev_ch ...passed
00:05:00.252    Test: test_admin_path ...passed
00:05:00.253    Test: test_reset_bdev_ctrlr ...passed
00:05:00.253    Test: test_find_io_path ...passed
00:05:00.253    Test: test_retry_io_if_ana_state_is_updating ...passed
00:05:00.253    Test: test_retry_io_for_io_path_error ...passed
00:05:00.253    Test: test_retry_io_count ...passed
00:05:00.253    Test: test_concurrent_read_ana_log_page ...passed
00:05:00.253    Test: test_retry_io_for_ana_error ...passed
00:05:00.253    Test: test_check_io_error_resiliency_params ...[2024-12-13 23:38:30.665173] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1.
00:05:00.253  [2024-12-13 23:38:30.665386] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0.
00:05:00.253  [2024-12-13 23:38:30.665551] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0.
00:05:00.253  [2024-12-13 23:38:30.665747] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec.
00:05:00.253  [2024-12-13 23:38:30.665910] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0.
00:05:00.253  [2024-12-13 23:38:30.666113] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0.
00:05:00.253  [2024-12-13 23:38:30.666416] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec.
00:05:00.253  [2024-12-13 23:38:30.666934] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec.
00:05:00.253  [2024-12-13 23:38:30.667331] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec.
00:05:00.253  passed
00:05:00.253    Test: test_retry_io_if_ctrlr_is_resetting ...passed
00:05:00.253    Test: test_reconnect_ctrlr ...[2024-12-13 23:38:30.670101] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.253  [2024-12-13 23:38:30.670592] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.253  [2024-12-13 23:38:30.671142] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.253  [2024-12-13 23:38:30.671624] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.253  [2024-12-13 23:38:30.672098] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.253  passed
00:05:00.253    Test: test_retry_failover_ctrlr ...[2024-12-13 23:38:30.673257] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.253  passed
00:05:00.253    Test: test_fail_path ...[2024-12-13 23:38:30.674702] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.253  [2024-12-13 23:38:30.675207] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.253  [2024-12-13 23:38:30.675699] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.253  [2024-12-13 23:38:30.676138] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.253  [2024-12-13 23:38:30.676686] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.253  passed
00:05:00.253    Test: test_nvme_ns_cmp ...passed
00:05:00.253    Test: test_ana_transition ...passed
00:05:00.253    Test: test_set_preferred_path ...passed
00:05:00.253    Test: test_find_next_io_path ...passed
00:05:00.253    Test: test_find_io_path_min_qd ...passed
00:05:00.253    Test: test_disable_auto_failback ...[2024-12-13 23:38:30.681355] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.253  passed
00:05:00.253    Test: test_set_multipath_policy ...passed
00:05:00.253    Test: test_uuid_generation ...passed
00:05:00.253    Test: test_retry_io_to_same_path ...passed
00:05:00.253    Test: test_race_between_reset_and_disconnected ...passed
00:05:00.253    Test: test_ctrlr_op_rpc ...passed
00:05:00.253    Test: test_bdev_ctrlr_op_rpc ...passed
00:05:00.253    Test: test_disable_enable_ctrlr ...[2024-12-13 23:38:30.690587] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.253  [2024-12-13 23:38:30.691221] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:05:00.253  passed
00:05:00.253    Test: test_delete_ctrlr_done ...passed
00:05:00.253    Test: test_ns_remove_during_reset ...passed
00:05:00.253  
00:05:00.253  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:00.253                suites      1      1    n/a      0        0
00:05:00.253                 tests     48     48     48      0        0
00:05:00.253               asserts   3553   3553   3553      0      n/a
00:05:00.253  
00:05:00.253  Elapsed time =    0.042 seconds
00:05:00.253   23:38:30	-- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut
00:05:00.253  Test Options
00:05:00.253  blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2
00:05:00.253  
00:05:00.253  
00:05:00.253       CUnit - A unit testing framework for C - Version 2.1-3
00:05:00.253       http://cunit.sourceforge.net/
00:05:00.253  
00:05:00.253  
00:05:00.253  Suite: raid
00:05:00.253    Test: test_create_raid ...passed
00:05:00.253    Test: test_create_raid_superblock ...passed
00:05:00.253    Test: test_delete_raid ...passed
00:05:00.253    Test: test_create_raid_invalid_args ...[2024-12-13 23:38:30.744786] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1'
00:05:00.253  [2024-12-13 23:38:30.745325] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231
00:05:00.253  [2024-12-13 23:38:30.745959] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1
00:05:00.253  [2024-12-13 23:38:30.746353] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed
00:05:00.253  [2024-12-13 23:38:30.747290] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed
00:05:00.253  passed
00:05:00.253    Test: test_delete_raid_invalid_args ...passed
00:05:00.253    Test: test_io_channel ...passed
00:05:00.253    Test: test_reset_io ...passed
00:05:00.253    Test: test_write_io ...passed
00:05:00.253    Test: test_read_io ...passed
00:05:00.821    Test: test_unmap_io ...passed
00:05:00.821    Test: test_io_failure ...[2024-12-13 23:38:31.483692] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0
00:05:00.821  passed
00:05:00.821    Test: test_multi_raid_no_io ...passed
00:05:00.821    Test: test_multi_raid_with_io ...passed
00:05:00.821    Test: test_io_type_supported ...passed
00:05:00.821    Test: test_raid_json_dump_info ...passed
00:05:00.821    Test: test_context_size ...passed
00:05:00.821    Test: test_raid_level_conversions ...passed
00:05:00.821    Test: test_raid_process ...passed
00:05:00.821    Test: test_raid_io_split ...passed
00:05:00.821  
00:05:00.821  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:00.821                suites      1      1    n/a      0        0
00:05:00.821                 tests     19     19     19      0        0
00:05:00.821               asserts 177879 177879 177879      0      n/a
00:05:00.821  
00:05:00.821  Elapsed time =    0.748 seconds
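The raid suite header above declares blocklen = 4096 and strip_size = 64. Assuming strip_size is a block count, as its pairing with blocklen suggests, each strip spans 256 KiB, which is the granularity the split and I/O tests in this suite exercise:

    # strip geometry implied by the suite's Test Options line
    blocklen=4096    # bytes per block
    strip_size=64    # blocks per strip (assumed unit)
    echo $(( blocklen * strip_size ))   # 262144 bytes = 256 KiB per strip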
00:05:00.821   23:38:31	-- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut
00:05:00.821  
00:05:00.821  
00:05:00.821       CUnit - A unit testing framework for C - Version 2.1-3
00:05:00.821       http://cunit.sourceforge.net/
00:05:00.821  
00:05:00.821  
00:05:00.821  Suite: raid_sb
00:05:00.821    Test: test_raid_bdev_write_superblock ...passed
00:05:00.821    Test: test_raid_bdev_load_base_bdev_superblock ...passed
00:05:00.821    Test: test_raid_bdev_parse_superblock ...[2024-12-13 23:38:31.533470] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev
00:05:00.821  passed
00:05:00.821  
00:05:00.821  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:00.821                suites      1      1    n/a      0        0
00:05:00.821                 tests      3      3      3      0        0
00:05:00.821               asserts     32     32     32      0      n/a
00:05:00.821  
00:05:00.821  Elapsed time =    0.001 seconds
00:05:01.080   23:38:31	-- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut
00:05:01.080  
00:05:01.080  
00:05:01.080       CUnit - A unit testing framework for C - Version 2.1-3
00:05:01.080       http://cunit.sourceforge.net/
00:05:01.080  
00:05:01.080  
00:05:01.080  Suite: concat
00:05:01.080    Test: test_concat_start ...passed
00:05:01.080    Test: test_concat_rw ...passed
00:05:01.080    Test: test_concat_null_payload ...passed
00:05:01.080  
00:05:01.080  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:01.080                suites      1      1    n/a      0        0
00:05:01.080                 tests      3      3      3      0        0
00:05:01.080               asserts   8097   8097   8097      0      n/a
00:05:01.080  
00:05:01.080  Elapsed time =    0.007 seconds
00:05:01.080   23:38:31	-- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut
00:05:01.080  
00:05:01.080  
00:05:01.080       CUnit - A unit testing framework for C - Version 2.1-3
00:05:01.080       http://cunit.sourceforge.net/
00:05:01.080  
00:05:01.080  
00:05:01.080  Suite: raid1
00:05:01.080    Test: test_raid1_start ...passed
00:05:01.080    Test: test_raid1_read_balancing ...passed
00:05:01.080  
00:05:01.081  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:01.081                suites      1      1    n/a      0        0
00:05:01.081                 tests      2      2      2      0        0
00:05:01.081               asserts   2856   2856   2856      0      n/a
00:05:01.081  
00:05:01.081  Elapsed time =    0.004 seconds
00:05:01.081   23:38:31	-- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut
00:05:01.081  
00:05:01.081  
00:05:01.081       CUnit - A unit testing framework for C - Version 2.1-3
00:05:01.081       http://cunit.sourceforge.net/
00:05:01.081  
00:05:01.081  
00:05:01.081  Suite: zone
00:05:01.081    Test: test_zone_get_operation ...passed
00:05:01.081    Test: test_bdev_zone_get_info ...passed
00:05:01.081    Test: test_bdev_zone_management ...passed
00:05:01.081    Test: test_bdev_zone_append ...passed
00:05:01.081    Test: test_bdev_zone_append_with_md ...passed
00:05:01.081    Test: test_bdev_zone_appendv ...passed
00:05:01.081    Test: test_bdev_zone_appendv_with_md ...passed
00:05:01.081    Test: test_bdev_io_get_append_location ...passed
00:05:01.081  
00:05:01.081  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:01.081                suites      1      1    n/a      0        0
00:05:01.081                 tests      8      8      8      0        0
00:05:01.081               asserts     94     94     94      0      n/a
00:05:01.081  
00:05:01.081  Elapsed time =    0.001 seconds
00:05:01.081   23:38:31	-- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut
00:05:01.081  
00:05:01.081  
00:05:01.081       CUnit - A unit testing framework for C - Version 2.1-3
00:05:01.081       http://cunit.sourceforge.net/
00:05:01.081  
00:05:01.081  
00:05:01.081  Suite: gpt_parse
00:05:01.081    Test: test_parse_mbr_and_primary ...[2024-12-13 23:38:31.678864] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:05:01.081  [2024-12-13 23:38:31.679417] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:05:01.081  [2024-12-13 23:38:31.679674] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873
00:05:01.081  [2024-12-13 23:38:31.679937] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header
00:05:01.081  [2024-12-13 23:38:31.680147] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128
00:05:01.081  [2024-12-13 23:38:31.680442] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions
00:05:01.081  passed
00:05:01.081    Test: test_parse_secondary ...[2024-12-13 23:38:31.682054] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873
00:05:01.081  [2024-12-13 23:38:31.682288] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header
00:05:01.081  [2024-12-13 23:38:31.682442] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128
00:05:01.081  [2024-12-13 23:38:31.682587] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions
00:05:01.081  passed
00:05:01.081    Test: test_check_mbr ...[2024-12-13 23:38:31.683653] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:05:01.081  [2024-12-13 23:38:31.683847] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:05:01.081  passed
00:05:01.081    Test: test_read_header ...[2024-12-13 23:38:31.684208] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600
00:05:01.081  [2024-12-13 23:38:31.684367] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438
00:05:01.081  [2024-12-13 23:38:31.684577] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match
00:05:01.081  [2024-12-13 23:38:31.684743] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1)
00:05:01.081  [2024-12-13 23:38:31.684903] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0)
00:05:01.081  [2024-12-13 23:38:31.685057] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error
00:05:01.081  passed
00:05:01.081    Test: test_read_partitions ...[2024-12-13 23:38:31.685419] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128
00:05:01.081  [2024-12-13 23:38:31.685662] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80)
00:05:01.081  [2024-12-13 23:38:31.685843] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough
00:05:01.081  [2024-12-13 23:38:31.686003] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf
00:05:01.081  [2024-12-13 23:38:31.686551] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match
00:05:01.081  passed
00:05:01.081  
00:05:01.081  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:01.081                suites      1      1    n/a      0        0
00:05:01.081                 tests      5      5      5      0        0
00:05:01.081               asserts     33     33     33      0      n/a
00:05:01.081  
00:05:01.081  Elapsed time =    0.006 seconds
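Read together, the gpt_parse errors above spell out the validation chain a GPT header must survive: a plausible header size, a matching header CRC32, the 8-byte signature, my_lba pointing at LBA 1, a sane usable-LBA range, at most 128 partition entries of the expected size, and a matching entry-array CRC32. The signature check can be eyeballed on any raw image; disk.img and the 512-byte sector size below are assumptions for illustration:

    # dump the 8-byte GPT signature at LBA 1 of a raw disk image
    dd if=disk.img bs=512 skip=1 count=1 2>/dev/null | od -An -c -N 8
    # a valid header prints:   E   F   I       P   A   R   T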
00:05:01.081   23:38:31	-- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut
00:05:01.081  
00:05:01.081  
00:05:01.081       CUnit - A unit testing framework for C - Version 2.1-3
00:05:01.081       http://cunit.sourceforge.net/
00:05:01.081  
00:05:01.081  
00:05:01.081  Suite: bdev_part
00:05:01.081    Test: part_test ...[2024-12-13 23:38:31.717267] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists
00:05:01.081  passed
00:05:01.081    Test: part_free_test ...passed
00:05:01.081    Test: part_get_io_channel_test ...passed
00:05:01.081    Test: part_construct_ext ...passed
00:05:01.081  
00:05:01.081  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:01.081                suites      1      1    n/a      0        0
00:05:01.081                 tests      4      4      4      0        0
00:05:01.081               asserts     48     48     48      0      n/a
00:05:01.081  
00:05:01.081  Elapsed time =    0.053 seconds
00:05:01.081   23:38:31	-- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut
00:05:01.081  
00:05:01.081  
00:05:01.081       CUnit - A unit testing framework for C - Version 2.1-3
00:05:01.081       http://cunit.sourceforge.net/
00:05:01.081  
00:05:01.081  
00:05:01.081  Suite: scsi_nvme_suite
00:05:01.081    Test: scsi_nvme_translate_test ...passed
00:05:01.081  
00:05:01.081  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:01.081                suites      1      1    n/a      0        0
00:05:01.081                 tests      1      1      1      0        0
00:05:01.081               asserts    104    104    104      0      n/a
00:05:01.081  
00:05:01.081  Elapsed time =    0.000 seconds
00:05:01.341   23:38:31	-- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut
00:05:01.341  
00:05:01.341  
00:05:01.341       CUnit - A unit testing framework for C - Version 2.1-3
00:05:01.341       http://cunit.sourceforge.net/
00:05:01.341  
00:05:01.341  
00:05:01.341  Suite: lvol
00:05:01.341    Test: ut_lvs_init ...[2024-12-13 23:38:31.842736] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev
00:05:01.341  [2024-12-13 23:38:31.843337] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device
00:05:01.341  passed
00:05:01.341    Test: ut_lvol_init ...passed
00:05:01.341    Test: ut_lvol_snapshot ...passed
00:05:01.341    Test: ut_lvol_clone ...passed
00:05:01.341    Test: ut_lvs_destroy ...passed
00:05:01.341    Test: ut_lvs_unload ...passed
00:05:01.341    Test: ut_lvol_resize ...[2024-12-13 23:38:31.846463] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist
00:05:01.341  passed
00:05:01.341    Test: ut_lvol_set_read_only ...passed
00:05:01.341    Test: ut_lvol_hotremove ...passed
00:05:01.341    Test: ut_vbdev_lvol_get_io_channel ...passed
00:05:01.341    Test: ut_vbdev_lvol_io_type_supported ...passed
00:05:01.341    Test: ut_lvol_read_write ...passed
00:05:01.341    Test: ut_vbdev_lvol_submit_request ...passed
00:05:01.341    Test: ut_lvol_examine_config ...passed
00:05:01.341    Test: ut_lvol_examine_disk ...[2024-12-13 23:38:31.849000] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID
00:05:01.341  passed
00:05:01.341    Test: ut_lvol_rename ...[2024-12-13 23:38:31.850404] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name'
00:05:01.341  [2024-12-13 23:38:31.850672] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed
00:05:01.341  passed
00:05:01.341    Test: ut_bdev_finish ...passed
00:05:01.341    Test: ut_lvs_rename ...passed
00:05:01.341    Test: ut_lvol_seek ...passed
00:05:01.341    Test: ut_esnap_dev_create ...[2024-12-13 23:38:31.852424] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID
00:05:01.341  [2024-12-13 23:38:31.852648] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36)
00:05:01.341  [2024-12-13 23:38:31.852840] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID
00:05:01.341  [2024-12-13 23:38:31.853026] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1
00:05:01.341  passed
00:05:01.341    Test: ut_lvol_esnap_clone_bad_args ...[2024-12-13 23:38:31.853563] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified
00:05:01.341  [2024-12-13 23:38:31.853790] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19
00:05:01.341  passed
00:05:01.341  
00:05:01.341  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:01.341                suites      1      1    n/a      0        0
00:05:01.341                 tests     21     21     21      0        0
00:05:01.341               asserts    712    712    712      0      n/a
00:05:01.341  
00:05:01.341  Elapsed time =    0.006 seconds
00:05:01.341   23:38:31	-- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut
00:05:01.341  
00:05:01.341  
00:05:01.341       CUnit - A unit testing framework for C - Version 2.1-3
00:05:01.341       http://cunit.sourceforge.net/
00:05:01.341  
00:05:01.341  
00:05:01.341  Suite: zone_block
00:05:01.341    Test: test_zone_block_create ...passed
00:05:01.341    Test: test_zone_block_create_invalid ...[2024-12-13 23:38:31.905430] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed
00:05:01.341  [2024-12-13 23:38:31.905922] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists
00:05:01.341  [2024-12-13 23:38:31.906229] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev
00:05:01.341  [2024-12-13 23:38:31.906400] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists
00:05:01.341  [2024-12-13 23:38:31.906657] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0
00:05:01.341  [2024-12-13 23:38:31.906841] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument
00:05:01.341  [2024-12-13 23:38:31.907100] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0
00:05:01.341  [2024-12-13 23:38:31.907289] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument
00:05:01.341  passed
00:05:01.341    Test: test_get_zone_info ...[2024-12-13 23:38:31.908155] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.341  [2024-12-13 23:38:31.908367] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.341  [2024-12-13 23:38:31.908546] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.341  passed
00:05:01.341    Test: test_supported_io_types ...passed
00:05:01.341    Test: test_reset_zone ...[2024-12-13 23:38:31.910056] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.341  [2024-12-13 23:38:31.910263] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.341  passed
00:05:01.341    Test: test_open_zone ...[2024-12-13 23:38:31.911093] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.341  [2024-12-13 23:38:31.911927] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.341  [2024-12-13 23:38:31.912128] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.341  passed
00:05:01.341    Test: test_zone_write ...[2024-12-13 23:38:31.912909] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2
00:05:01.341  [2024-12-13 23:38:31.913093] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.341  [2024-12-13 23:38:31.913295] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000)
00:05:01.341  [2024-12-13 23:38:31.913486] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.341  [2024-12-13 23:38:31.919087] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405)
00:05:01.341  [2024-12-13 23:38:31.919294] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.341  [2024-12-13 23:38:31.919412] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405)
00:05:01.341  [2024-12-13 23:38:31.919664] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.341  [2024-12-13 23:38:31.925731] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)
00:05:01.341  [2024-12-13 23:38:31.925946] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.341  passed
00:05:01.341    Test: test_zone_read ...[2024-12-13 23:38:31.926800] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10)
00:05:01.342  [2024-12-13 23:38:31.927009] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.342  [2024-12-13 23:38:31.927207] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000)
00:05:01.342  [2024-12-13 23:38:31.927368] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.342  [2024-12-13 23:38:31.927985] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10)
00:05:01.342  [2024-12-13 23:38:31.928171] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.342  passed
00:05:01.342    Test: test_close_zone ...[2024-12-13 23:38:31.928900] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.342  [2024-12-13 23:38:31.929134] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.342  [2024-12-13 23:38:31.929510] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.342  [2024-12-13 23:38:31.929720] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.342  passed
00:05:01.342    Test: test_finish_zone ...[2024-12-13 23:38:31.930757] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.342  [2024-12-13 23:38:31.930975] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.342  passed
00:05:01.342    Test: test_append_zone ...[2024-12-13 23:38:31.931649] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2
00:05:01.342  [2024-12-13 23:38:31.931820] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.342  [2024-12-13 23:38:31.932004] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000)
00:05:01.342  [2024-12-13 23:38:31.932144] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.342  [2024-12-13 23:38:31.944762] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)
00:05:01.342  [2024-12-13 23:38:31.944948] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:01.342  passed
00:05:01.342  
00:05:01.342  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:01.342                suites      1      1    n/a      0        0
00:05:01.342                 tests     11     11     11      0        0
00:05:01.342               asserts   3437   3437   3437      0      n/a
00:05:01.342  
00:05:01.342  Elapsed time =    0.035 seconds
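The zone_block_write errors above encode the two write rules this suite keeps probing: a write must start exactly at the zone's write pointer, and it must not run past the zone capacity (lba 0x407 != wp 0x405 fails the first; lba 0x3f0 + len 0x20 fails the second). Here is a hypothetical mirror of those checks; the 0x400-block zone capacity is an assumption consistent with the failing length, not a figure printed in the log:

    zone_write_ok() {
            # lba must equal the write pointer; wp + len must stay within capacity
            local lba=$1 wp=$2 len=$3 cap=$4
            (( lba == wp ))       || { echo "invalid address (lba $lba, wp $wp)"; return 1; }
            (( wp + len <= cap )) || { echo "exceeds zone capacity"; return 1; }
            echo ok
    }
    zone_write_ok $((0x407)) $((0x405)) 1         $((0x400))   # -> invalid address
    zone_write_ok $((0x3f0)) $((0x3f0)) $((0x20)) $((0x400))   # -> exceeds zone capacity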
00:05:01.342   23:38:31	-- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut
00:05:01.342  
00:05:01.342  
00:05:01.342       CUnit - A unit testing framework for C - Version 2.1-3
00:05:01.342       http://cunit.sourceforge.net/
00:05:01.342  
00:05:01.342  
00:05:01.342  Suite: bdev
00:05:01.342    Test: basic ...[2024-12-13 23:38:32.036247] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x558195b62401): Operation not permitted (rc=-1)
00:05:01.342  [2024-12-13 23:38:32.036739] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x558195b623c0): Operation not permitted (rc=-1)
00:05:01.342  [2024-12-13 23:38:32.036930] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x558195b62401): Operation not permitted (rc=-1)
00:05:01.342  passed
00:05:01.601    Test: unregister_and_close ...passed
00:05:01.601    Test: unregister_and_close_different_threads ...passed
00:05:01.601    Test: basic_qos ...passed
00:05:01.601    Test: put_channel_during_reset ...passed
00:05:01.601    Test: aborted_reset ...passed
00:05:01.860    Test: aborted_reset_no_outstanding_io ...passed
00:05:01.860    Test: io_during_reset ...passed
00:05:01.860    Test: reset_completions ...passed
00:05:01.860    Test: io_during_qos_queue ...passed
00:05:01.860    Test: io_during_qos_reset ...passed
00:05:01.860    Test: enomem ...passed
00:05:01.860    Test: enomem_multi_bdev ...passed
00:05:01.860    Test: enomem_multi_bdev_unregister ...passed
00:05:02.120    Test: enomem_multi_io_target ...passed
00:05:02.120    Test: qos_dynamic_enable ...passed
00:05:02.120    Test: bdev_histograms_mt ...passed
00:05:02.120    Test: bdev_set_io_timeout_mt ...[2024-12-13 23:38:32.725701] thread.c: 467:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered
00:05:02.120  passed
00:05:02.120    Test: lock_lba_range_then_submit_io ...[2024-12-13 23:38:32.741403] thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x558195b62380 already registered (old:0x6130000003c0 new:0x613000000c80)
00:05:02.120  passed
00:05:02.120    Test: unregister_during_reset ...passed
00:05:02.120    Test: event_notify_and_close ...passed
00:05:02.379    Test: unregister_and_qos_poller ...passed
00:05:02.379  Suite: bdev_wrong_thread
00:05:02.379    Test: spdk_bdev_register_wt ...[2024-12-13 23:38:32.856838] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480)
00:05:02.379  passed
00:05:02.379    Test: spdk_bdev_examine_wt ...[2024-12-13 23:38:32.857195] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480)
00:05:02.379  passed
00:05:02.379  
00:05:02.379  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:02.379                suites      2      2    n/a      0        0
00:05:02.379                 tests     24     24     24      0        0
00:05:02.379               asserts    621    621    621      0      n/a
00:05:02.379  
00:05:02.379  Elapsed time =    0.844 seconds
00:05:02.379  ************************************
00:05:02.379  END TEST unittest_bdev
00:05:02.379  ************************************
00:05:02.379  
00:05:02.379  real	0m3.639s
00:05:02.379  user	0m1.709s
00:05:02.379  sys	0m1.869s
00:05:02.379   23:38:32	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:02.379   23:38:32	-- common/autotest_common.sh@10 -- # set +x
00:05:02.379   23:38:32	-- unit/unittest.sh@189 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:05:02.379   23:38:32	-- unit/unittest.sh@194 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:05:02.379   23:38:32	-- unit/unittest.sh@199 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:05:02.379   23:38:32	-- unit/unittest.sh@203 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:05:02.379   23:38:32	-- unit/unittest.sh@204 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut
00:05:02.379   23:38:32	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:02.379   23:38:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:02.379   23:38:32	-- common/autotest_common.sh@10 -- # set +x
00:05:02.379  ************************************
00:05:02.379  START TEST unittest_bdev_raid5f
00:05:02.380  ************************************
00:05:02.380   23:38:32	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut
00:05:02.380  
00:05:02.380  
00:05:02.380       CUnit - A unit testing framework for C - Version 2.1-3
00:05:02.380       http://cunit.sourceforge.net/
00:05:02.380  
00:05:02.380  
00:05:02.380  Suite: raid5f
00:05:02.380    Test: test_raid5f_start ...passed
00:05:02.947    Test: test_raid5f_submit_read_request ...passed
00:05:02.947    Test: test_raid5f_stripe_request_map_iovecs ...passed
00:05:06.235    Test: test_raid5f_submit_full_stripe_write_request ...passed
00:05:21.137    Test: test_raid5f_chunk_write_error ...passed
00:05:26.430    Test: test_raid5f_chunk_write_error_with_enomem ...passed
00:05:28.964    Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed
00:05:50.895    Test: test_raid5f_submit_read_request_degraded ...passed
00:05:50.895  
00:05:50.895  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:50.895                suites      1      1    n/a      0        0
00:05:50.895                 tests      8      8      8      0        0
00:05:50.895               asserts 351864 351864 351864      0      n/a
00:05:50.895  
00:05:50.895  Elapsed time =   48.369 seconds
00:05:50.895  ************************************
00:05:50.895  END TEST unittest_bdev_raid5f
00:05:50.895  ************************************
00:05:50.895  
00:05:50.895  real	0m48.463s
00:05:50.895  user	0m45.950s
00:05:50.895  sys	0m2.505s
00:05:50.895   23:39:21	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:50.895   23:39:21	-- common/autotest_common.sh@10 -- # set +x
00:05:50.895   23:39:21	-- unit/unittest.sh@207 -- # run_test unittest_blob_blobfs unittest_blob
00:05:50.895   23:39:21	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:50.895   23:39:21	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:50.895   23:39:21	-- common/autotest_common.sh@10 -- # set +x
00:05:50.895  ************************************
00:05:50.895  START TEST unittest_blob_blobfs
00:05:50.895  ************************************
00:05:50.895   23:39:21	-- common/autotest_common.sh@1114 -- # unittest_blob
00:05:50.895   23:39:21	-- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]]
00:05:50.895   23:39:21	-- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut
00:05:50.895  
00:05:50.895  
00:05:50.895       CUnit - A unit testing framework for C - Version 2.1-3
00:05:50.895       http://cunit.sourceforge.net/
00:05:50.895  
00:05:50.895  
00:05:50.895  Suite: blob_nocopy_noextent
00:05:50.895    Test: blob_init ...[2024-12-13 23:39:21.503216] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:05:50.895  passed
00:05:50.895    Test: blob_thin_provision ...passed
00:05:50.895    Test: blob_read_only ...passed
00:05:50.895    Test: bs_load ...[2024-12-13 23:39:21.606805] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:05:50.895  passed
00:05:50.895    Test: bs_load_custom_cluster_size ...passed
00:05:51.154    Test: bs_load_after_failed_grow ...passed
00:05:51.154    Test: bs_cluster_sz ...[2024-12-13 23:39:21.643798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:05:51.154  [2024-12-13 23:39:21.644275] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:05:51.154  [2024-12-13 23:39:21.644442] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:05:51.154  passed
00:05:51.154    Test: bs_resize_md ...passed
00:05:51.154    Test: bs_destroy ...passed
00:05:51.154    Test: bs_type ...passed
00:05:51.154    Test: bs_super_block ...passed
00:05:51.154    Test: bs_test_recover_cluster_count ...passed
00:05:51.154    Test: bs_grow_live ...passed
00:05:51.154    Test: bs_grow_live_no_space ...passed
00:05:51.154    Test: bs_test_grow ...passed
00:05:51.154    Test: blob_serialize_test ...passed
00:05:51.154    Test: super_block_crc ...passed
00:05:51.154    Test: blob_thin_prov_write_count_io ...passed
00:05:51.154    Test: bs_load_iter_test ...passed
00:05:51.154    Test: blob_relations ...[2024-12-13 23:39:21.856606] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:51.154  [2024-12-13 23:39:21.856737] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:51.154  [2024-12-13 23:39:21.857778] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:51.154  [2024-12-13 23:39:21.857864] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:51.154  passed
00:05:51.154    Test: blob_relations2 ...[2024-12-13 23:39:21.876420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:51.154  [2024-12-13 23:39:21.876502] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:51.154  [2024-12-13 23:39:21.876559] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:51.154  [2024-12-13 23:39:21.876580] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:51.154  [2024-12-13 23:39:21.878161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:51.154  [2024-12-13 23:39:21.878238] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:51.154  [2024-12-13 23:39:21.878704] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:51.154  [2024-12-13 23:39:21.878774] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:51.154  passed
00:05:51.413    Test: blob_relations3 ...passed
00:05:51.413    Test: blobstore_clean_power_failure ...passed
00:05:51.413    Test: blob_delete_snapshot_power_failure ...[2024-12-13 23:39:22.081332] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:05:51.413  [2024-12-13 23:39:22.097399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:05:51.413  [2024-12-13 23:39:22.097511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:05:51.413  [2024-12-13 23:39:22.097557] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:51.413  [2024-12-13 23:39:22.113662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:05:51.413  [2024-12-13 23:39:22.113765] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:05:51.413  [2024-12-13 23:39:22.113818] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:05:51.413  [2024-12-13 23:39:22.113856] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:51.413  [2024-12-13 23:39:22.130068] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:05:51.413  [2024-12-13 23:39:22.130215] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:51.672  [2024-12-13 23:39:22.146484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:05:51.672  [2024-12-13 23:39:22.146648] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:51.672  [2024-12-13 23:39:22.163398] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:05:51.672  [2024-12-13 23:39:22.163521] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:51.672  passed
00:05:51.672    Test: blob_create_snapshot_power_failure ...[2024-12-13 23:39:22.211384] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:05:51.672  [2024-12-13 23:39:22.242508] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:05:51.672  [2024-12-13 23:39:22.258534] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:05:51.672  passed
00:05:51.672    Test: blob_io_unit ...passed
00:05:51.672    Test: blob_io_unit_compatibility ...passed
00:05:51.672    Test: blob_ext_md_pages ...passed
00:05:51.672    Test: blob_esnap_io_4096_4096 ...passed
00:05:51.930    Test: blob_esnap_io_512_512 ...passed
00:05:51.930    Test: blob_esnap_io_4096_512 ...passed
00:05:51.930    Test: blob_esnap_io_512_4096 ...passed
00:05:51.930  Suite: blob_bs_nocopy_noextent
00:05:51.930    Test: blob_open ...passed
00:05:51.930    Test: blob_create ...[2024-12-13 23:39:22.562435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:05:51.930  passed
00:05:51.930    Test: blob_create_loop ...passed
00:05:52.189    Test: blob_create_fail ...[2024-12-13 23:39:22.683021] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:05:52.189  passed
00:05:52.189    Test: blob_create_internal ...passed
00:05:52.189    Test: blob_create_zero_extent ...passed
00:05:52.189    Test: blob_snapshot ...passed
00:05:52.189    Test: blob_clone ...passed
00:05:52.189    Test: blob_inflate ...[2024-12-13 23:39:22.921505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:05:52.448  passed
00:05:52.448    Test: blob_delete ...passed
00:05:52.448    Test: blob_resize_test ...[2024-12-13 23:39:23.007536] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:05:52.448  passed
00:05:52.448    Test: channel_ops ...passed
00:05:52.448    Test: blob_super ...passed
00:05:52.448    Test: blob_rw_verify_iov ...passed
00:05:52.706    Test: blob_unmap ...passed
00:05:52.706    Test: blob_iter ...passed
00:05:52.706    Test: blob_parse_md ...passed
00:05:52.706    Test: bs_load_pending_removal ...passed
00:05:52.706    Test: bs_unload ...[2024-12-13 23:39:23.350739] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:05:52.706  passed
00:05:52.706    Test: bs_usable_clusters ...passed
00:05:52.706    Test: blob_crc ...[2024-12-13 23:39:23.438186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:05:52.706  [2024-12-13 23:39:23.438373] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:05:52.965  passed
00:05:52.965    Test: blob_flags ...passed
00:05:52.965    Test: bs_version ...passed
00:05:52.965    Test: blob_set_xattrs_test ...[2024-12-13 23:39:23.569840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:05:52.965  [2024-12-13 23:39:23.569992] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:05:52.965  passed
00:05:53.224    Test: blob_thin_prov_alloc ...passed
00:05:53.224    Test: blob_insert_cluster_msg_test ...passed
00:05:53.224    Test: blob_thin_prov_rw ...passed
00:05:53.224    Test: blob_thin_prov_rle ...passed
00:05:53.224    Test: blob_thin_prov_rw_iov ...passed
00:05:53.224    Test: blob_snapshot_rw ...passed
00:05:53.482    Test: blob_snapshot_rw_iov ...passed
00:05:53.741    Test: blob_inflate_rw ...passed
00:05:53.741    Test: blob_snapshot_freeze_io ...passed
00:05:53.741    Test: blob_operation_split_rw ...passed
00:05:54.000    Test: blob_operation_split_rw_iov ...passed
00:05:54.000    Test: blob_simultaneous_operations ...[2024-12-13 23:39:24.573121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:05:54.000  [2024-12-13 23:39:24.573261] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:54.000  [2024-12-13 23:39:24.574637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:05:54.000  [2024-12-13 23:39:24.574688] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:54.000  [2024-12-13 23:39:24.586577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:05:54.000  [2024-12-13 23:39:24.586640] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:54.000  [2024-12-13 23:39:24.586822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:05:54.000  [2024-12-13 23:39:24.586861] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:54.000  passed
00:05:54.000    Test: blob_persist_test ...passed
00:05:54.000    Test: blob_decouple_snapshot ...passed
00:05:54.259    Test: blob_seek_io_unit ...passed
00:05:54.259    Test: blob_nested_freezes ...passed
00:05:54.259  Suite: blob_blob_nocopy_noextent
00:05:54.259    Test: blob_write ...passed
00:05:54.259    Test: blob_read ...passed
00:05:54.259    Test: blob_rw_verify ...passed
00:05:54.517    Test: blob_rw_verify_iov_nomem ...passed
00:05:54.517    Test: blob_rw_iov_read_only ...passed
00:05:54.517    Test: blob_xattr ...passed
00:05:54.517    Test: blob_dirty_shutdown ...passed
00:05:54.517    Test: blob_is_degraded ...passed
00:05:54.517  Suite: blob_esnap_bs_nocopy_noextent
00:05:54.517    Test: blob_esnap_create ...passed
00:05:54.785    Test: blob_esnap_thread_add_remove ...passed
00:05:54.785    Test: blob_esnap_clone_snapshot ...passed
00:05:54.785    Test: blob_esnap_clone_inflate ...passed
00:05:54.785    Test: blob_esnap_clone_decouple ...passed
00:05:54.785    Test: blob_esnap_clone_reload ...passed
00:05:54.785    Test: blob_esnap_hotplug ...passed
00:05:54.785  Suite: blob_nocopy_extent
00:05:54.785    Test: blob_init ...[2024-12-13 23:39:25.469184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:05:54.785  passed
00:05:54.785    Test: blob_thin_provision ...passed
00:05:55.068    Test: blob_read_only ...passed
00:05:55.068    Test: bs_load ...[2024-12-13 23:39:25.529263] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:05:55.068  passed
00:05:55.068    Test: bs_load_custom_cluster_size ...passed
00:05:55.068    Test: bs_load_after_failed_grow ...passed
00:05:55.068    Test: bs_cluster_sz ...[2024-12-13 23:39:25.561973] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:05:55.068  [2024-12-13 23:39:25.562271] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:05:55.068  [2024-12-13 23:39:25.562357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:05:55.068  passed
00:05:55.068    Test: bs_resize_md ...passed
00:05:55.068    Test: bs_destroy ...passed
00:05:55.068    Test: bs_type ...passed
00:05:55.068    Test: bs_super_block ...passed
00:05:55.068    Test: bs_test_recover_cluster_count ...passed
00:05:55.068    Test: bs_grow_live ...passed
00:05:55.068    Test: bs_grow_live_no_space ...passed
00:05:55.068    Test: bs_test_grow ...passed
00:05:55.068    Test: blob_serialize_test ...passed
00:05:55.068    Test: super_block_crc ...passed
00:05:55.068    Test: blob_thin_prov_write_count_io ...passed
00:05:55.068    Test: bs_load_iter_test ...passed
00:05:55.068    Test: blob_relations ...[2024-12-13 23:39:25.761350] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:55.068  [2024-12-13 23:39:25.761493] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:55.068  [2024-12-13 23:39:25.762513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:55.068  [2024-12-13 23:39:25.762604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:55.068  passed
00:05:55.068    Test: blob_relations2 ...[2024-12-13 23:39:25.780299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:55.068  [2024-12-13 23:39:25.780386] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:55.068  [2024-12-13 23:39:25.780433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:55.068  [2024-12-13 23:39:25.780465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:55.069  [2024-12-13 23:39:25.781966] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:55.069  [2024-12-13 23:39:25.782037] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:55.069  [2024-12-13 23:39:25.782485] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:55.069  [2024-12-13 23:39:25.782544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:55.069  passed
00:05:55.328    Test: blob_relations3 ...passed
00:05:55.328    Test: blobstore_clean_power_failure ...passed
00:05:55.328    Test: blob_delete_snapshot_power_failure ...[2024-12-13 23:39:25.982274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:05:55.328  [2024-12-13 23:39:25.998619] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:05:55.328  [2024-12-13 23:39:26.016583] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:05:55.328  [2024-12-13 23:39:26.016700] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:05:55.328  [2024-12-13 23:39:26.016736] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:55.328  [2024-12-13 23:39:26.035291] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:05:55.328  [2024-12-13 23:39:26.035425] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:05:55.328  [2024-12-13 23:39:26.035467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:05:55.328  [2024-12-13 23:39:26.035500] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:55.328  [2024-12-13 23:39:26.053982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:05:55.328  [2024-12-13 23:39:26.054066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:05:55.328  [2024-12-13 23:39:26.054114] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:05:55.328  [2024-12-13 23:39:26.054168] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:55.587  [2024-12-13 23:39:26.071441] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:05:55.587  [2024-12-13 23:39:26.071576] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:55.587  [2024-12-13 23:39:26.088116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:05:55.587  [2024-12-13 23:39:26.088250] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:55.587  [2024-12-13 23:39:26.104707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:05:55.587  [2024-12-13 23:39:26.104821] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:55.587  passed
00:05:55.587    Test: blob_create_snapshot_power_failure ...[2024-12-13 23:39:26.152371] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:05:55.587  [2024-12-13 23:39:26.168662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:05:55.587  [2024-12-13 23:39:26.199975] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:05:55.587  [2024-12-13 23:39:26.215878] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:05:55.587  passed
00:05:55.587    Test: blob_io_unit ...passed
00:05:55.587    Test: blob_io_unit_compatibility ...passed
00:05:55.587    Test: blob_ext_md_pages ...passed
00:05:55.846    Test: blob_esnap_io_4096_4096 ...passed
00:05:55.846    Test: blob_esnap_io_512_512 ...passed
00:05:55.846    Test: blob_esnap_io_4096_512 ...passed
00:05:55.846    Test: blob_esnap_io_512_4096 ...passed
00:05:55.846  Suite: blob_bs_nocopy_extent
00:05:55.846    Test: blob_open ...passed
00:05:55.846    Test: blob_create ...[2024-12-13 23:39:26.513976] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:05:55.846  passed
00:05:56.104    Test: blob_create_loop ...passed
00:05:56.104    Test: blob_create_fail ...[2024-12-13 23:39:26.634584] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:05:56.104  passed
00:05:56.104    Test: blob_create_internal ...passed
00:05:56.104    Test: blob_create_zero_extent ...passed
00:05:56.104    Test: blob_snapshot ...passed
00:05:56.104    Test: blob_clone ...passed
00:05:56.364    Test: blob_inflate ...[2024-12-13 23:39:26.867329] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:05:56.364  passed
00:05:56.364    Test: blob_delete ...passed
00:05:56.364    Test: blob_resize_test ...[2024-12-13 23:39:26.951943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:05:56.364  passed
00:05:56.364    Test: channel_ops ...passed
00:05:56.364    Test: blob_super ...passed
00:05:56.622    Test: blob_rw_verify_iov ...passed
00:05:56.622    Test: blob_unmap ...passed
00:05:56.622    Test: blob_iter ...passed
00:05:56.622    Test: blob_parse_md ...passed
00:05:56.622    Test: bs_load_pending_removal ...passed
00:05:56.622    Test: bs_unload ...[2024-12-13 23:39:27.301789] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:05:56.622  passed
00:05:56.881    Test: bs_usable_clusters ...passed
00:05:56.881    Test: blob_crc ...[2024-12-13 23:39:27.388337] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:05:56.881  [2024-12-13 23:39:27.388488] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:05:56.881  passed
00:05:56.881    Test: blob_flags ...passed
00:05:56.881    Test: bs_version ...passed
00:05:56.881    Test: blob_set_xattrs_test ...[2024-12-13 23:39:27.517779] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:05:56.881  [2024-12-13 23:39:27.517916] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:05:56.881  passed
00:05:57.140    Test: blob_thin_prov_alloc ...passed
00:05:57.140    Test: blob_insert_cluster_msg_test ...passed
00:05:57.140    Test: blob_thin_prov_rw ...passed
00:05:57.140    Test: blob_thin_prov_rle ...passed
00:05:57.140    Test: blob_thin_prov_rw_iov ...passed
00:05:57.140    Test: blob_snapshot_rw ...passed
00:05:57.398    Test: blob_snapshot_rw_iov ...passed
00:05:57.657    Test: blob_inflate_rw ...passed
00:05:57.657    Test: blob_snapshot_freeze_io ...passed
00:05:57.657    Test: blob_operation_split_rw ...passed
00:05:57.916    Test: blob_operation_split_rw_iov ...passed
00:05:57.916    Test: blob_simultaneous_operations ...[2024-12-13 23:39:28.498979] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:05:57.916  [2024-12-13 23:39:28.499111] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:57.916  [2024-12-13 23:39:28.500354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:05:57.916  [2024-12-13 23:39:28.500429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:57.916  [2024-12-13 23:39:28.511639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:05:57.916  [2024-12-13 23:39:28.511697] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:57.916  [2024-12-13 23:39:28.511820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:05:57.916  [2024-12-13 23:39:28.511846] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:57.916  passed
00:05:57.917    Test: blob_persist_test ...passed
00:05:58.175    Test: blob_decouple_snapshot ...passed
00:05:58.175    Test: blob_seek_io_unit ...passed
00:05:58.175    Test: blob_nested_freezes ...passed
00:05:58.175  Suite: blob_blob_nocopy_extent
00:05:58.175    Test: blob_write ...passed
00:05:58.175    Test: blob_read ...passed
00:05:58.175    Test: blob_rw_verify ...passed
00:05:58.434    Test: blob_rw_verify_iov_nomem ...passed
00:05:58.434    Test: blob_rw_iov_read_only ...passed
00:05:58.434    Test: blob_xattr ...passed
00:05:58.434    Test: blob_dirty_shutdown ...passed
00:05:58.434    Test: blob_is_degraded ...passed
00:05:58.434  Suite: blob_esnap_bs_nocopy_extent
00:05:58.434    Test: blob_esnap_create ...passed
00:05:58.693    Test: blob_esnap_thread_add_remove ...passed
00:05:58.693    Test: blob_esnap_clone_snapshot ...passed
00:05:58.693    Test: blob_esnap_clone_inflate ...passed
00:05:58.693    Test: blob_esnap_clone_decouple ...passed
00:05:58.693    Test: blob_esnap_clone_reload ...passed
00:05:58.693    Test: blob_esnap_hotplug ...passed
00:05:58.693  Suite: blob_copy_noextent
00:05:58.693    Test: blob_init ...[2024-12-13 23:39:29.393184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:05:58.693  passed
00:05:58.693    Test: blob_thin_provision ...passed
00:05:58.951    Test: blob_read_only ...passed
00:05:58.952    Test: bs_load ...[2024-12-13 23:39:29.451739] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:05:58.952  passed
00:05:58.952    Test: bs_load_custom_cluster_size ...passed
00:05:58.952    Test: bs_load_after_failed_grow ...passed
00:05:58.952    Test: bs_cluster_sz ...[2024-12-13 23:39:29.483040] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:05:58.952  [2024-12-13 23:39:29.483267] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:05:58.952  [2024-12-13 23:39:29.483315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:05:58.952  passed
00:05:58.952    Test: bs_resize_md ...passed
00:05:58.952    Test: bs_destroy ...passed
00:05:58.952    Test: bs_type ...passed
00:05:58.952    Test: bs_super_block ...passed
00:05:58.952    Test: bs_test_recover_cluster_count ...passed
00:05:58.952    Test: bs_grow_live ...passed
00:05:58.952    Test: bs_grow_live_no_space ...passed
00:05:58.952    Test: bs_test_grow ...passed
00:05:58.952    Test: blob_serialize_test ...passed
00:05:58.952    Test: super_block_crc ...passed
00:05:58.952    Test: blob_thin_prov_write_count_io ...passed
00:05:58.952    Test: bs_load_iter_test ...passed
00:05:58.952    Test: blob_relations ...[2024-12-13 23:39:29.671104] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:58.952  [2024-12-13 23:39:29.671234] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:58.952  [2024-12-13 23:39:29.671852] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:58.952  [2024-12-13 23:39:29.671892] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:58.952  passed
00:05:59.211    Test: blob_relations2 ...[2024-12-13 23:39:29.688808] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:59.211  [2024-12-13 23:39:29.688887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:59.211  [2024-12-13 23:39:29.688930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:59.211  [2024-12-13 23:39:29.688947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:59.211  [2024-12-13 23:39:29.689963] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:59.211  [2024-12-13 23:39:29.690031] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:59.211  [2024-12-13 23:39:29.690357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:05:59.211  [2024-12-13 23:39:29.690410] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:59.211  passed
00:05:59.211    Test: blob_relations3 ...passed
00:05:59.211    Test: blobstore_clean_power_failure ...passed
00:05:59.211    Test: blob_delete_snapshot_power_failure ...[2024-12-13 23:39:29.891062] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:05:59.211  [2024-12-13 23:39:29.906426] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:05:59.211  [2024-12-13 23:39:29.906531] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:05:59.211  [2024-12-13 23:39:29.906562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:59.211  [2024-12-13 23:39:29.921873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:05:59.211  [2024-12-13 23:39:29.921952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:05:59.211  [2024-12-13 23:39:29.922001] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:05:59.211  [2024-12-13 23:39:29.922025] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:59.211  [2024-12-13 23:39:29.937674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:05:59.211  [2024-12-13 23:39:29.937799] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:59.471  [2024-12-13 23:39:29.953542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:05:59.471  [2024-12-13 23:39:29.953679] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:59.471  [2024-12-13 23:39:29.969303] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:05:59.471  [2024-12-13 23:39:29.969417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:05:59.471  passed
00:05:59.471    Test: blob_create_snapshot_power_failure ...[2024-12-13 23:39:30.015161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:05:59.471  [2024-12-13 23:39:30.047859] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:05:59.471  [2024-12-13 23:39:30.064491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:05:59.471  passed
00:05:59.471    Test: blob_io_unit ...passed
00:05:59.471    Test: blob_io_unit_compatibility ...passed
00:05:59.471    Test: blob_ext_md_pages ...passed
00:05:59.730    Test: blob_esnap_io_4096_4096 ...passed
00:05:59.730    Test: blob_esnap_io_512_512 ...passed
00:05:59.730    Test: blob_esnap_io_4096_512 ...passed
00:05:59.730    Test: blob_esnap_io_512_4096 ...passed
00:05:59.730  Suite: blob_bs_copy_noextent
00:05:59.730    Test: blob_open ...passed
00:05:59.730    Test: blob_create ...[2024-12-13 23:39:30.391948] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:05:59.730  passed
00:05:59.989    Test: blob_create_loop ...passed
00:05:59.989    Test: blob_create_fail ...[2024-12-13 23:39:30.506567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:05:59.989  passed
00:05:59.989    Test: blob_create_internal ...passed
00:05:59.989    Test: blob_create_zero_extent ...passed
00:05:59.989    Test: blob_snapshot ...passed
00:05:59.989    Test: blob_clone ...passed
00:06:00.248    Test: blob_inflate ...[2024-12-13 23:39:30.728444] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:06:00.248  passed
00:06:00.248    Test: blob_delete ...passed
00:06:00.248    Test: blob_resize_test ...[2024-12-13 23:39:30.815746] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:06:00.248  passed
00:06:00.248    Test: channel_ops ...passed
00:06:00.248    Test: blob_super ...passed
00:06:00.248    Test: blob_rw_verify_iov ...passed
00:06:00.507    Test: blob_unmap ...passed
00:06:00.507    Test: blob_iter ...passed
00:06:00.507    Test: blob_parse_md ...passed
00:06:00.507    Test: bs_load_pending_removal ...passed
00:06:00.507    Test: bs_unload ...[2024-12-13 23:39:31.162040] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:06:00.507  passed
00:06:00.507    Test: bs_usable_clusters ...passed
00:06:00.765    Test: blob_crc ...[2024-12-13 23:39:31.248167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:06:00.765  [2024-12-13 23:39:31.248314] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:06:00.765  passed
00:06:00.765    Test: blob_flags ...passed
00:06:00.765    Test: bs_version ...passed
00:06:00.765    Test: blob_set_xattrs_test ...[2024-12-13 23:39:31.386620] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:06:00.765  [2024-12-13 23:39:31.386791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:06:00.765  passed
00:06:01.023    Test: blob_thin_prov_alloc ...passed
00:06:01.023    Test: blob_insert_cluster_msg_test ...passed
00:06:01.023    Test: blob_thin_prov_rw ...passed
00:06:01.023    Test: blob_thin_prov_rle ...passed
00:06:01.023    Test: blob_thin_prov_rw_iov ...passed
00:06:01.282    Test: blob_snapshot_rw ...passed
00:06:01.282    Test: blob_snapshot_rw_iov ...passed
00:06:01.541    Test: blob_inflate_rw ...passed
00:06:01.541    Test: blob_snapshot_freeze_io ...passed
00:06:01.541    Test: blob_operation_split_rw ...passed
00:06:01.801    Test: blob_operation_split_rw_iov ...passed
00:06:01.801    Test: blob_simultaneous_operations ...[2024-12-13 23:39:32.391160] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:06:01.801  [2024-12-13 23:39:32.391260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:01.801  [2024-12-13 23:39:32.391762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:06:01.801  [2024-12-13 23:39:32.391803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:01.801  [2024-12-13 23:39:32.394757] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:06:01.801  [2024-12-13 23:39:32.394832] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:01.801  [2024-12-13 23:39:32.394925] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:06:01.801  [2024-12-13 23:39:32.394947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:01.801  passed
00:06:01.801    Test: blob_persist_test ...passed
00:06:01.801    Test: blob_decouple_snapshot ...passed
00:06:02.060    Test: blob_seek_io_unit ...passed
00:06:02.060    Test: blob_nested_freezes ...passed
00:06:02.060  Suite: blob_blob_copy_noextent
00:06:02.060    Test: blob_write ...passed
00:06:02.060    Test: blob_read ...passed
00:06:02.060    Test: blob_rw_verify ...passed
00:06:02.060    Test: blob_rw_verify_iov_nomem ...passed
00:06:02.319    Test: blob_rw_iov_read_only ...passed
00:06:02.319    Test: blob_xattr ...passed
00:06:02.319    Test: blob_dirty_shutdown ...passed
00:06:02.319    Test: blob_is_degraded ...passed
00:06:02.319  Suite: blob_esnap_bs_copy_noextent
00:06:02.319    Test: blob_esnap_create ...passed
00:06:02.578    Test: blob_esnap_thread_add_remove ...passed
00:06:02.578    Test: blob_esnap_clone_snapshot ...passed
00:06:02.578    Test: blob_esnap_clone_inflate ...passed
00:06:02.578    Test: blob_esnap_clone_decouple ...passed
00:06:02.578    Test: blob_esnap_clone_reload ...passed
00:06:02.837    Test: blob_esnap_hotplug ...passed
00:06:02.837  Suite: blob_copy_extent
00:06:02.837    Test: blob_init ...[2024-12-13 23:39:33.355224] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:06:02.837  passed
00:06:02.837    Test: blob_thin_provision ...passed
00:06:02.837    Test: blob_read_only ...passed
00:06:02.837    Test: bs_load ...[2024-12-13 23:39:33.436504] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:06:02.837  passed
00:06:02.837    Test: bs_load_custom_cluster_size ...passed
00:06:02.837    Test: bs_load_after_failed_grow ...passed
00:06:02.837    Test: bs_cluster_sz ...[2024-12-13 23:39:33.474335] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:06:02.837  [2024-12-13 23:39:33.474575] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:06:02.837  [2024-12-13 23:39:33.474634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:06:02.837  passed
00:06:02.837    Test: bs_resize_md ...passed
00:06:02.837    Test: bs_destroy ...passed
00:06:02.837    Test: bs_type ...passed
00:06:03.096    Test: bs_super_block ...passed
00:06:03.096    Test: bs_test_recover_cluster_count ...passed
00:06:03.096    Test: bs_grow_live ...passed
00:06:03.096    Test: bs_grow_live_no_space ...passed
00:06:03.096    Test: bs_test_grow ...passed
00:06:03.096    Test: blob_serialize_test ...passed
00:06:03.096    Test: super_block_crc ...passed
00:06:03.096    Test: blob_thin_prov_write_count_io ...passed
00:06:03.096    Test: bs_load_iter_test ...passed
00:06:03.096    Test: blob_relations ...[2024-12-13 23:39:33.681280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:06:03.096  [2024-12-13 23:39:33.681409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:03.096  [2024-12-13 23:39:33.682480] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:06:03.096  [2024-12-13 23:39:33.682551] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:03.096  passed
00:06:03.096    Test: blob_relations2 ...[2024-12-13 23:39:33.701200] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:06:03.096  [2024-12-13 23:39:33.701309] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:03.096  [2024-12-13 23:39:33.701354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:06:03.096  [2024-12-13 23:39:33.701380] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:03.096  [2024-12-13 23:39:33.702881] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:06:03.096  [2024-12-13 23:39:33.702952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:03.096  [2024-12-13 23:39:33.703436] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:06:03.096  [2024-12-13 23:39:33.703494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:03.096  passed
00:06:03.096    Test: blob_relations3 ...passed
00:06:03.355    Test: blobstore_clean_power_failure ...passed
00:06:03.355    Test: blob_delete_snapshot_power_failure ...[2024-12-13 23:39:33.928010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:06:03.355  [2024-12-13 23:39:33.945408] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:06:03.355  [2024-12-13 23:39:33.962849] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:06:03.355  [2024-12-13 23:39:33.962984] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:06:03.355  [2024-12-13 23:39:33.963018] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:03.355  [2024-12-13 23:39:33.984657] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:06:03.355  [2024-12-13 23:39:33.984761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:06:03.355  [2024-12-13 23:39:33.984786] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:06:03.355  [2024-12-13 23:39:33.984812] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:03.355  [2024-12-13 23:39:34.001925] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:06:03.355  [2024-12-13 23:39:34.002027] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:06:03.355  [2024-12-13 23:39:34.002052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:06:03.355  [2024-12-13 23:39:34.002079] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:03.355  [2024-12-13 23:39:34.019335] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:06:03.355  [2024-12-13 23:39:34.019461] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:03.355  [2024-12-13 23:39:34.036421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:06:03.355  [2024-12-13 23:39:34.036546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:03.355  [2024-12-13 23:39:34.053627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:06:03.355  [2024-12-13 23:39:34.053736] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:03.355  passed
00:06:03.614    Test: blob_create_snapshot_power_failure ...[2024-12-13 23:39:34.103784] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:06:03.614  [2024-12-13 23:39:34.120340] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:06:03.614  [2024-12-13 23:39:34.152968] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:06:03.614  [2024-12-13 23:39:34.169790] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:06:03.614  passed
00:06:03.614    Test: blob_io_unit ...passed
00:06:03.614    Test: blob_io_unit_compatibility ...passed
00:06:03.614    Test: blob_ext_md_pages ...passed
00:06:03.614    Test: blob_esnap_io_4096_4096 ...passed
00:06:03.614    Test: blob_esnap_io_512_512 ...passed
00:06:03.873    Test: blob_esnap_io_4096_512 ...passed
00:06:03.873    Test: blob_esnap_io_512_4096 ...passed
00:06:03.873  Suite: blob_bs_copy_extent
00:06:03.873    Test: blob_open ...passed
00:06:03.873    Test: blob_create ...[2024-12-13 23:39:34.487217] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:06:03.873  passed
00:06:03.873    Test: blob_create_loop ...passed
00:06:04.132    Test: blob_create_fail ...[2024-12-13 23:39:34.619419] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:06:04.132  passed
00:06:04.132    Test: blob_create_internal ...passed
00:06:04.132    Test: blob_create_zero_extent ...passed
00:06:04.132    Test: blob_snapshot ...passed
00:06:04.132    Test: blob_clone ...passed
00:06:04.391    Test: blob_inflate ...[2024-12-13 23:39:34.869143] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:06:04.391  passed
00:06:04.391    Test: blob_delete ...passed
00:06:04.391    Test: blob_resize_test ...[2024-12-13 23:39:34.965386] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:06:04.391  passed
00:06:04.391    Test: channel_ops ...passed
00:06:04.391    Test: blob_super ...passed
00:06:04.391    Test: blob_rw_verify_iov ...passed
00:06:04.648    Test: blob_unmap ...passed
00:06:04.648    Test: blob_iter ...passed
00:06:04.648    Test: blob_parse_md ...passed
00:06:04.648    Test: bs_load_pending_removal ...passed
00:06:04.648    Test: bs_unload ...[2024-12-13 23:39:35.343999] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:06:04.648  passed
00:06:04.906    Test: bs_usable_clusters ...passed
00:06:04.906    Test: blob_crc ...[2024-12-13 23:39:35.433391] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:06:04.906  [2024-12-13 23:39:35.433518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:06:04.906  passed
00:06:04.906    Test: blob_flags ...passed
00:06:04.906    Test: bs_version ...passed
00:06:04.906    Test: blob_set_xattrs_test ...[2024-12-13 23:39:35.580845] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:06:04.906  [2024-12-13 23:39:35.580971] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:06:04.906  passed
00:06:05.166    Test: blob_thin_prov_alloc ...passed
00:06:05.166    Test: blob_insert_cluster_msg_test ...passed
00:06:05.166    Test: blob_thin_prov_rw ...passed
00:06:05.166    Test: blob_thin_prov_rle ...passed
00:06:05.425    Test: blob_thin_prov_rw_iov ...passed
00:06:05.425    Test: blob_snapshot_rw ...passed
00:06:05.425    Test: blob_snapshot_rw_iov ...passed
00:06:05.684    Test: blob_inflate_rw ...passed
00:06:05.684    Test: blob_snapshot_freeze_io ...passed
00:06:05.942    Test: blob_operation_split_rw ...passed
00:06:05.942    Test: blob_operation_split_rw_iov ...passed
00:06:05.942    Test: blob_simultaneous_operations ...[2024-12-13 23:39:36.614720] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:06:05.942  [2024-12-13 23:39:36.614830] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:05.942  [2024-12-13 23:39:36.615400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:06:05.942  [2024-12-13 23:39:36.615468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:05.942  [2024-12-13 23:39:36.618377] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:06:05.942  [2024-12-13 23:39:36.618417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:05.942  [2024-12-13 23:39:36.618532] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:06:05.942  [2024-12-13 23:39:36.618567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:05.942  passed
00:06:06.201    Test: blob_persist_test ...passed
00:06:06.201    Test: blob_decouple_snapshot ...passed
00:06:06.201    Test: blob_seek_io_unit ...passed
00:06:06.201    Test: blob_nested_freezes ...passed
00:06:06.201  Suite: blob_blob_copy_extent
00:06:06.201    Test: blob_write ...passed
00:06:06.201    Test: blob_read ...passed
00:06:06.459    Test: blob_rw_verify ...passed
00:06:06.459    Test: blob_rw_verify_iov_nomem ...passed
00:06:06.459    Test: blob_rw_iov_read_only ...passed
00:06:06.459    Test: blob_xattr ...passed
00:06:06.459    Test: blob_dirty_shutdown ...passed
00:06:06.718    Test: blob_is_degraded ...passed
00:06:06.718  Suite: blob_esnap_bs_copy_extent
00:06:06.718    Test: blob_esnap_create ...passed
00:06:06.718    Test: blob_esnap_thread_add_remove ...passed
00:06:06.718    Test: blob_esnap_clone_snapshot ...passed
00:06:06.718    Test: blob_esnap_clone_inflate ...passed
00:06:06.977    Test: blob_esnap_clone_decouple ...passed
00:06:06.977    Test: blob_esnap_clone_reload ...passed
00:06:06.977    Test: blob_esnap_hotplug ...passed
00:06:06.977  
00:06:06.977  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:06.977                suites     16     16    n/a      0        0
00:06:06.977                 tests    348    348    348      0        0
00:06:06.977               asserts  92605  92605  92605      0      n/a
00:06:06.977  
00:06:06.977  Elapsed time =   16.119 seconds
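
For reference, the Suite/Test/Run Summary blocks above come from CUnit 2.1-3's basic runner. A minimal harness of the same shape (hypothetical suite and test names, not SPDK code) might look like:

    #include <CUnit/Basic.h>

    /* Hypothetical test body; SPDK's blob_ut tests follow the same pattern. */
    static void test_example(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }

        CU_pSuite suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        /* CU_BRM_VERBOSE prints the "Suite:"/"Test: ... passed" lines and the
         * Run Summary table seen throughout this log. */
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();
        unsigned int failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures == 0 ? 0 : 1;
    }

Build against libcunit (e.g. -lcunit) to reproduce the same output format.
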
00:06:07.235   23:39:37	-- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut
00:06:07.235  
00:06:07.235  
00:06:07.236       CUnit - A unit testing framework for C - Version 2.1-3
00:06:07.236       http://cunit.sourceforge.net/
00:06:07.236  
00:06:07.236  
00:06:07.236  Suite: blob_bdev
00:06:07.236    Test: create_bs_dev ...passed
00:06:07.236    Test: create_bs_dev_ro ...[2024-12-13 23:39:37.739364] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options
00:06:07.236  passed
00:06:07.236    Test: create_bs_dev_rw ...passed
00:06:07.236    Test: claim_bs_dev ...[2024-12-13 23:39:37.740522] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev
00:06:07.236  passed
00:06:07.236    Test: claim_bs_dev_ro ...passed
00:06:07.236    Test: deferred_destroy_refs ...passed
00:06:07.236    Test: deferred_destroy_channels ...passed
00:06:07.236    Test: deferred_destroy_threads ...passed
00:06:07.236  
00:06:07.236  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:07.236                suites      1      1    n/a      0        0
00:06:07.236                 tests      8      8      8      0        0
00:06:07.236               asserts    119    119    119      0      n/a
00:06:07.236  
00:06:07.236  Elapsed time =    0.001 seconds
00:06:07.236   23:39:37	-- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut
00:06:07.236  
00:06:07.236  
00:06:07.236       CUnit - A unit testing framework for C - Version 2.1-3
00:06:07.236       http://cunit.sourceforge.net/
00:06:07.236  
00:06:07.236  
00:06:07.236  Suite: tree
00:06:07.236    Test: blobfs_tree_op_test ...passed
00:06:07.236  
00:06:07.236  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:07.236                suites      1      1    n/a      0        0
00:06:07.236                 tests      1      1      1      0        0
00:06:07.236               asserts     27     27     27      0      n/a
00:06:07.236  
00:06:07.236  Elapsed time =    0.000 seconds
00:06:07.236   23:39:37	-- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut
00:06:07.236  
00:06:07.236  
00:06:07.236       CUnit - A unit testing framework for C - Version 2.1-3
00:06:07.236       http://cunit.sourceforge.net/
00:06:07.236  
00:06:07.236  
00:06:07.236  Suite: blobfs_async_ut
00:06:07.236    Test: fs_init ...passed
00:06:07.236    Test: fs_open ...passed
00:06:07.236    Test: fs_create ...passed
00:06:07.236    Test: fs_truncate ...passed
00:06:07.236    Test: fs_rename ...[2024-12-13 23:39:37.945034] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted
00:06:07.236  passed
00:06:07.236    Test: fs_rw_async ...passed
00:06:07.494    Test: fs_writev_readv_async ...passed
00:06:07.494    Test: tree_find_buffer_ut ...passed
00:06:07.494    Test: channel_ops ...passed
00:06:07.494    Test: channel_ops_sync ...passed
00:06:07.494  
00:06:07.494  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:07.494                suites      1      1    n/a      0        0
00:06:07.494                 tests     10     10     10      0        0
00:06:07.494               asserts    292    292    292      0      n/a
00:06:07.494  
00:06:07.494  Elapsed time =    0.192 seconds
00:06:07.494   23:39:38	-- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut
00:06:07.494  
00:06:07.494  
00:06:07.494       CUnit - A unit testing framework for C - Version 2.1-3
00:06:07.494       http://cunit.sourceforge.net/
00:06:07.494  
00:06:07.494  
00:06:07.494  Suite: blobfs_sync_ut
00:06:07.494    Test: cache_read_after_write ...[2024-12-13 23:39:38.130494] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted
00:06:07.494  passed
00:06:07.494    Test: file_length ...passed
00:06:07.494    Test: append_write_to_extend_blob ...passed
00:06:07.494    Test: partial_buffer ...passed
00:06:07.494    Test: cache_write_null_buffer ...passed
00:06:07.754    Test: fs_create_sync ...passed
00:06:07.754    Test: fs_rename_sync ...passed
00:06:07.754    Test: cache_append_no_cache ...passed
00:06:07.754    Test: fs_delete_file_without_close ...passed
00:06:07.754  
00:06:07.754  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:07.754                suites      1      1    n/a      0        0
00:06:07.754                 tests      9      9      9      0        0
00:06:07.754               asserts    345    345    345      0      n/a
00:06:07.754  
00:06:07.754  Elapsed time =    0.417 seconds
00:06:07.754   23:39:38	-- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut
00:06:07.754  
00:06:07.754  
00:06:07.754       CUnit - A unit testing framework for C - Version 2.1-3
00:06:07.754       http://cunit.sourceforge.net/
00:06:07.754  
00:06:07.754  
00:06:07.754  Suite: blobfs_bdev_ut
00:06:07.754    Test: spdk_blobfs_bdev_detect_test ...[2024-12-13 23:39:38.348715] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c:  59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1
00:06:07.754  [2024-12-13 23:39:38.349101] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c:  59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1
00:06:07.754  passed
00:06:07.754    Test: spdk_blobfs_bdev_create_test ...passed
00:06:07.754    Test: spdk_blobfs_bdev_mount_test ...passed
00:06:07.754  
00:06:07.754  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:07.754                suites      1      1    n/a      0        0
00:06:07.754                 tests      3      3      3      0        0
00:06:07.754               asserts      9      9      9      0      n/a
00:06:07.754  
00:06:07.754  Elapsed time =    0.001 seconds
00:06:07.754  ************************************
00:06:07.754  END TEST unittest_blob_blobfs
00:06:07.754  ************************************
00:06:07.754  
00:06:07.754  real	0m16.885s
00:06:07.754  user	0m16.411s
00:06:07.754  sys	0m0.690s
00:06:07.754   23:39:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:07.754   23:39:38	-- common/autotest_common.sh@10 -- # set +x
00:06:07.754   23:39:38	-- unit/unittest.sh@208 -- # run_test unittest_event unittest_event
00:06:07.754   23:39:38	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:07.754   23:39:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:07.754   23:39:38	-- common/autotest_common.sh@10 -- # set +x
00:06:07.754  ************************************
00:06:07.754  START TEST unittest_event
00:06:07.754  ************************************
00:06:07.754   23:39:38	-- common/autotest_common.sh@1114 -- # unittest_event
00:06:07.754   23:39:38	-- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut
00:06:07.754  
00:06:07.754  
00:06:07.754       CUnit - A unit testing framework for C - Version 2.1-3
00:06:07.754       http://cunit.sourceforge.net/
00:06:07.754  
00:06:07.754  
00:06:07.754  Suite: app_suite
00:06:07.754    Test: test_spdk_app_parse_args ...app_ut [options]
00:06:07.754  options:
00:06:07.754   -c, --config <config>     JSON config file (default none)
00:06:07.754       --json <config>       JSON config file (default none)
00:06:07.754       --json-ignore-init-errors
00:06:07.754                             don't exit on invalid config entry
00:06:07.754   -d, --limit-coredump      do not set max coredump size to RLIM_INFINITY
00:06:07.754   -g, --single-file-segments
00:06:07.754                             force creating just one hugetlbfs file
00:06:07.754   -h, --help                show this usage
00:06:07.754   -i, --shm-id <id>         shared memory ID (optional)
00:06:07.754   -m, --cpumask <mask or list>    core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK
00:06:07.754       --lcores <list>       lcore to CPU mapping list. The list is in the format:
00:06:07.754                             <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:06:07.754                             lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"'
00:06:07.754                             Within the group, '-' is used for range separator,
00:06:07.754                             ',' is used for single number separator.
00:06:07.754                             '( )' can be omitted for single element group,
00:06:07.754                             '@' can be omitted if cpus and lcores have the same value
00:06:07.754   -n, --mem-channels <num>  channel number of memory channels used for DPDK
00:06:07.754   -p, --main-core <id>      main (primary) core for DPDK
00:06:07.754   -r, --rpc-socket <path>   RPC listen address (default /var/tmp/spdk.sock)
00:06:07.754   -s, --mem-size <size>     memory size in MB for DPDK (default: 0MB)
00:06:07.754       --disable-cpumask-locks    Disable CPU core lock files.
00:06:07.754       --silence-noticelog   disable notice level logging to stderr
00:06:07.754       --msg-mempool-size <size>  global message memory pool size in count (default: 262143)
00:06:07.754   -u, --no-pci              disable PCI access
00:06:07.754       --wait-for-rpc        wait for RPCs to initialize subsystems
00:06:07.754       --max-delay <num>     maximum reactor delay (in microseconds)
00:06:07.754   -B, --pci-blocked <bdf>   pci addr to block (can be used more than once)
00:06:07.754   -A, --pci-allowed <bdf>   pci addr to allow (-B and -A cannot be used at the same time)
00:06:07.754  app_ut: invalid option -- 'z'
00:06:07.754  
00:06:07.754   -R, --huge-unlink         unlink huge files after initialization
00:06:07.754   -v, --version             print SPDK version
00:06:07.754       --huge-dir <path>     use a specific hugetlbfs mount to reserve memory from
00:06:07.754       --iova-mode <pa/va>   set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:06:07.754       --base-virtaddr <addr>      the base virtual address for DPDK (default: 0x200000000000)
00:06:07.754       --num-trace-entries <num>   number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768)
00:06:07.754                                   Tracepoints vary in size and can use more than one trace entry.
00:06:07.754       --rpcs-allowed	   comma-separated list of permitted RPCS
00:06:07.754       --env-context         Opaque context for use of the env implementation
00:06:07.754       --vfio-vf-token       VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:06:07.754       --no-huge             run without using hugepages
00:06:07.754   -L, --logflag <flag>    enable log flag (all, json_util, log, rpc, thread, trace)
00:06:07.754   -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:06:07.754                             group_name - tracepoint group name for spdk trace buffers (thread, all)
00:06:07.754                             tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1).
00:06:07.754                              Groups and masks can be combined (e.g. thread,bdev:0x1).
00:06:07.754                              All available tpoints can be found in /include/spdk_internal/trace_defs.h
00:06:07.754       --interrupt-mode      set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode)
00:06:07.755  app_ut: unrecognized option '--test-long-opt'
00:06:07.755  [2024-12-13 23:39:38.435345] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts.
00:06:07.755  [2024-12-13 23:39:38.435629] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time
00:06:07.755  [2024-12-13 23:39:38.435823] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments
00:06:07.755  passed
00:06:07.755  
00:06:07.755  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:07.755                suites      1      1    n/a      0        0
00:06:07.755                 tests      1      1      1      0        0
00:06:07.755               asserts      8      8      8      0      n/a
00:06:07.755  
00:06:07.755  Elapsed time =    0.001 seconds
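
The failures above are intentional: app_ut feeds spdk_app_parse_args() an invalid short option ('z'), an unrecognized long option, a duplicated app-specific option, and conflicting generic options. A minimal sketch of that API, assuming the two-argument spdk_app_opts_init() and the callback signatures of this SPDK revision (names here are illustrative):

    #include <errno.h>
    #include <stdio.h>
    #include "spdk/event.h"

    /* Hypothetical app-specific option callback. */
    static int app_parse(int ch, char *arg)
    {
        (void)arg;
        return ch == 'x' ? 0 : -EINVAL;
    }

    static void app_usage(void)
    {
        printf(" -x                        app-specific flag (illustrative)\n");
    }

    int main(int argc, char **argv)
    {
        struct spdk_app_opts opts = {};

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "app_ut_example";

        /* Passing "c:" here instead of "x" would collide with the generic
         * '-c, --config' option and produce the "Duplicated option 'c'"
         * error logged above. */
        if (spdk_app_parse_args(argc, argv, &opts, "x", NULL,
                                app_parse, app_usage) != SPDK_APP_PARSE_ARGS_SUCCESS) {
            return 1;
        }
        return 0;
    }
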
00:06:07.755   23:39:38	-- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut
00:06:07.755  
00:06:07.755  
00:06:07.755       CUnit - A unit testing framework for C - Version 2.1-3
00:06:07.755       http://cunit.sourceforge.net/
00:06:07.755  
00:06:07.755  
00:06:07.755  Suite: app_suite
00:06:07.755    Test: test_create_reactor ...passed
00:06:07.755    Test: test_init_reactors ...passed
00:06:07.755    Test: test_event_call ...passed
00:06:07.755    Test: test_schedule_thread ...passed
00:06:07.755    Test: test_reschedule_thread ...passed
00:06:07.755    Test: test_bind_thread ...passed
00:06:07.755    Test: test_for_each_reactor ...passed
00:06:07.755    Test: test_reactor_stats ...passed
00:06:07.755    Test: test_scheduler ...passed
00:06:07.755    Test: test_governor ...passed
00:06:07.755  
00:06:07.755  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:07.755                suites      1      1    n/a      0        0
00:06:07.755                 tests     10     10     10      0        0
00:06:07.755               asserts    344    344    344      0      n/a
00:06:07.755  
00:06:07.755  Elapsed time =    0.016 seconds
00:06:08.014  
00:06:08.014  real	0m0.078s
00:06:08.014  user	0m0.060s
00:06:08.014  sys	0m0.019s
00:06:08.014   23:39:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:08.014   23:39:38	-- common/autotest_common.sh@10 -- # set +x
00:06:08.014  ************************************
00:06:08.014  END TEST unittest_event
00:06:08.014  ************************************
00:06:08.014    23:39:38	-- unit/unittest.sh@209 -- # uname -s
00:06:08.014   23:39:38	-- unit/unittest.sh@209 -- # '[' Linux = Linux ']'
00:06:08.014   23:39:38	-- unit/unittest.sh@210 -- # run_test unittest_ftl unittest_ftl
00:06:08.014   23:39:38	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:08.014   23:39:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:08.014   23:39:38	-- common/autotest_common.sh@10 -- # set +x
00:06:08.014  ************************************
00:06:08.014  START TEST unittest_ftl
00:06:08.014  ************************************
00:06:08.014   23:39:38	-- common/autotest_common.sh@1114 -- # unittest_ftl
00:06:08.014   23:39:38	-- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut
00:06:08.014  
00:06:08.014  
00:06:08.014       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.014       http://cunit.sourceforge.net/
00:06:08.014  
00:06:08.014  
00:06:08.014  Suite: ftl_band_suite
00:06:08.014    Test: test_band_block_offset_from_addr_base ...passed
00:06:08.014    Test: test_band_block_offset_from_addr_offset ...passed
00:06:08.014    Test: test_band_addr_from_block_offset ...passed
00:06:08.014    Test: test_band_set_addr ...passed
00:06:08.014    Test: test_invalidate_addr ...passed
00:06:08.273    Test: test_next_xfer_addr ...passed
00:06:08.273  
00:06:08.273  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.273                suites      1      1    n/a      0        0
00:06:08.273                 tests      6      6      6      0        0
00:06:08.273               asserts  30356  30356  30356      0      n/a
00:06:08.273  
00:06:08.273  Elapsed time =    0.196 seconds
00:06:08.273   23:39:38	-- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut
00:06:08.273  
00:06:08.273  
00:06:08.273       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.273       http://cunit.sourceforge.net/
00:06:08.273  
00:06:08.273  
00:06:08.273  Suite: ftl_bitmap
00:06:08.273    Test: test_ftl_bitmap_create ...[2024-12-13 23:39:38.835889] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c:  52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes
00:06:08.273  [2024-12-13 23:39:38.836213] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c:  58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes
00:06:08.273  passed
00:06:08.273    Test: test_ftl_bitmap_get ...passed
00:06:08.273    Test: test_ftl_bitmap_set ...passed
00:06:08.273    Test: test_ftl_bitmap_clear ...passed
00:06:08.273    Test: test_ftl_bitmap_find_first_set ...passed
00:06:08.273    Test: test_ftl_bitmap_find_first_clear ...passed
00:06:08.273    Test: test_ftl_bitmap_count_set ...passed
00:06:08.273  
00:06:08.273  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.273                suites      1      1    n/a      0        0
00:06:08.273                 tests      7      7      7      0        0
00:06:08.273               asserts    137    137    137      0      n/a
00:06:08.273  
00:06:08.273  Elapsed time =    0.001 seconds
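
The two ftl_bitmap_create errors above encode two preconditions on the caller-supplied buffer: 8-byte alignment and a size divisible by 8. The API itself is internal to lib/ftl, so this is an illustrative re-statement of the checks in plain C, not SPDK's code:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static bool bitmap_buf_is_valid(const void *buf, size_t size)
    {
        if ((uintptr_t)buf % 8 != 0) {
            fprintf(stderr, "Buffer for bitmap must be aligned to 8 bytes\n");
            return false;
        }
        if (size % 8 != 0) {
            fprintf(stderr, "Size of buffer for bitmap must be divisible by 8 bytes\n");
            return false;
        }
        return true;
    }

    int main(void)
    {
        /* A uint64_t array satisfies both constraints by construction. */
        static uint64_t words[16];
        return bitmap_buf_is_valid(words, sizeof(words)) ? 0 : 1;
    }
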
00:06:08.273   23:39:38	-- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut
00:06:08.273  
00:06:08.273  
00:06:08.273       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.273       http://cunit.sourceforge.net/
00:06:08.273  
00:06:08.273  
00:06:08.273  Suite: ftl_io_suite
00:06:08.273    Test: test_completion ...passed
00:06:08.273    Test: test_multiple_ios ...passed
00:06:08.273  
00:06:08.273  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.273                suites      1      1    n/a      0        0
00:06:08.273                 tests      2      2      2      0        0
00:06:08.273               asserts     47     47     47      0      n/a
00:06:08.273  
00:06:08.273  Elapsed time =    0.003 seconds
00:06:08.273   23:39:38	-- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut
00:06:08.273  
00:06:08.273  
00:06:08.273       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.274       http://cunit.sourceforge.net/
00:06:08.274  
00:06:08.274  
00:06:08.274  Suite: ftl_mngt
00:06:08.274    Test: test_next_step ...passed
00:06:08.274    Test: test_continue_step ...passed
00:06:08.274    Test: test_get_func_and_step_cntx_alloc ...passed
00:06:08.274    Test: test_fail_step ...passed
00:06:08.274    Test: test_mngt_call_and_call_rollback ...passed
00:06:08.274    Test: test_nested_process_failure ...passed
00:06:08.274  
00:06:08.274  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.274                suites      1      1    n/a      0        0
00:06:08.274                 tests      6      6      6      0        0
00:06:08.274               asserts    176    176    176      0      n/a
00:06:08.274  
00:06:08.274  Elapsed time =    0.001 seconds
00:06:08.274   23:39:38	-- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut
00:06:08.274  
00:06:08.274  
00:06:08.274       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.274       http://cunit.sourceforge.net/
00:06:08.274  
00:06:08.274  
00:06:08.274  Suite: ftl_mempool
00:06:08.274    Test: test_ftl_mempool_create ...passed
00:06:08.274    Test: test_ftl_mempool_get_put ...passed
00:06:08.274  
00:06:08.274  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.274                suites      1      1    n/a      0        0
00:06:08.274                 tests      2      2      2      0        0
00:06:08.274               asserts     36     36     36      0      n/a
00:06:08.274  
00:06:08.274  Elapsed time =    0.000 seconds
00:06:08.274   23:39:38	-- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut
00:06:08.274  
00:06:08.274  
00:06:08.274       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.274       http://cunit.sourceforge.net/
00:06:08.274  
00:06:08.274  
00:06:08.274  Suite: ftl_addr64_suite
00:06:08.274    Test: test_addr_cached ...passed
00:06:08.274  
00:06:08.274  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.274                suites      1      1    n/a      0        0
00:06:08.274                 tests      1      1      1      0        0
00:06:08.274               asserts   1536   1536   1536      0      n/a
00:06:08.274  
00:06:08.274  Elapsed time =    0.000 seconds
00:06:08.274   23:39:38	-- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut
00:06:08.274  
00:06:08.274  
00:06:08.274       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.274       http://cunit.sourceforge.net/
00:06:08.274  
00:06:08.274  
00:06:08.274  Suite: ftl_sb
00:06:08.274    Test: test_sb_crc_v2 ...passed
00:06:08.274    Test: test_sb_crc_v3 ...passed
00:06:08.274    Test: test_sb_v3_md_layout ...[2024-12-13 23:39:38.984600] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions
00:06:08.274  passed
00:06:08.274    Test: test_sb_v5_md_layout ...[2024-12-13 23:39:38.984956] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow
00:06:08.274  [2024-12-13 23:39:38.985028] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow
00:06:08.274  [2024-12-13 23:39:38.985078] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow
00:06:08.274  [2024-12-13 23:39:38.985118] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found
00:06:08.274  [2024-12-13 23:39:38.985209] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c:  93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found
00:06:08.274  [2024-12-13 23:39:38.985252] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c:  88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found
00:06:08.274  [2024-12-13 23:39:38.985372] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c:  88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found
00:06:08.274  [2024-12-13 23:39:38.985460] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found
00:06:08.274  [2024-12-13 23:39:38.985510] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found
00:06:08.274  [2024-12-13 23:39:38.985563] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found
00:06:08.274  passed
00:06:08.274  
00:06:08.274  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.274                suites      1      1    n/a      0        0
00:06:08.274                 tests      4      4      4      0        0
00:06:08.274               asserts    148    148    148      0      n/a
00:06:08.274  
00:06:08.274  Elapsed time =    0.002 seconds
00:06:08.274   23:39:38	-- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut
00:06:08.533  
00:06:08.533  
00:06:08.533       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.533       http://cunit.sourceforge.net/
00:06:08.533  
00:06:08.533  
00:06:08.533  Suite: ftl_layout_upgrade
00:06:08.533    Test: test_l2p_upgrade ...passed
00:06:08.533  
00:06:08.533  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.533                suites      1      1    n/a      0        0
00:06:08.533                 tests      1      1      1      0        0
00:06:08.533               asserts    140    140    140      0      n/a
00:06:08.533  
00:06:08.533  Elapsed time =    0.001 seconds
00:06:08.533  
00:06:08.533  real	0m0.482s
00:06:08.533  user	0m0.227s
00:06:08.533  sys	0m0.255s
00:06:08.533   23:39:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:08.533   23:39:39	-- common/autotest_common.sh@10 -- # set +x
00:06:08.533  ************************************
00:06:08.533  END TEST unittest_ftl
00:06:08.533  ************************************
00:06:08.533   23:39:39	-- unit/unittest.sh@213 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut
00:06:08.533   23:39:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:08.533   23:39:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:08.533   23:39:39	-- common/autotest_common.sh@10 -- # set +x
00:06:08.533  ************************************
00:06:08.533  START TEST unittest_accel
00:06:08.533  ************************************
00:06:08.533   23:39:39	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut
00:06:08.533  
00:06:08.533  
00:06:08.533       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.533       http://cunit.sourceforge.net/
00:06:08.533  
00:06:08.533  
00:06:08.533  Suite: accel_sequence
00:06:08.533    Test: test_sequence_fill_copy ...passed
00:06:08.533    Test: test_sequence_abort ...passed
00:06:08.533    Test: test_sequence_append_error ...passed
00:06:08.533    Test: test_sequence_completion_error ...[2024-12-13 23:39:39.107564] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7efd9f2c27c0
00:06:08.533  [2024-12-13 23:39:39.108350] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7efd9f2c27c0
00:06:08.533  [2024-12-13 23:39:39.108581] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7efd9f2c27c0
00:06:08.533  [2024-12-13 23:39:39.108809] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7efd9f2c27c0
00:06:08.533  passed
00:06:08.533    Test: test_sequence_decompress ...passed
00:06:08.533    Test: test_sequence_reverse ...passed
00:06:08.533    Test: test_sequence_copy_elision ...passed
00:06:08.533    Test: test_sequence_accel_buffers ...passed
00:06:08.533    Test: test_sequence_memory_domain ...[2024-12-13 23:39:39.121767] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7
00:06:08.533  [2024-12-13 23:39:39.122092] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98
00:06:08.533  passed
00:06:08.533    Test: test_sequence_module_memory_domain ...passed
00:06:08.533    Test: test_sequence_crypto ...passed
00:06:08.533    Test: test_sequence_driver ...[2024-12-13 23:39:39.129769] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7efd9e69a7c0 using driver: ut
00:06:08.533  [2024-12-13 23:39:39.130026] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7efd9e69a7c0 through driver: ut
00:06:08.533  passed
00:06:08.533    Test: test_sequence_same_iovs ...passed
00:06:08.533    Test: test_sequence_crc32 ...passed
00:06:08.533  Suite: accel
00:06:08.533    Test: test_spdk_accel_task_complete ...passed
00:06:08.533    Test: test_get_task ...passed
00:06:08.533    Test: test_spdk_accel_submit_copy ...passed
00:06:08.533    Test: test_spdk_accel_submit_dualcast ...[2024-12-13 23:39:39.135662] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses
00:06:08.533  [2024-12-13 23:39:39.135876] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses
00:06:08.533  passed
00:06:08.533    Test: test_spdk_accel_submit_compare ...passed
00:06:08.533    Test: test_spdk_accel_submit_fill ...passed
00:06:08.533    Test: test_spdk_accel_submit_crc32c ...passed
00:06:08.533    Test: test_spdk_accel_submit_crc32cv ...passed
00:06:08.533    Test: test_spdk_accel_submit_copy_crc32c ...passed
00:06:08.533    Test: test_spdk_accel_submit_xor ...passed
00:06:08.533    Test: test_spdk_accel_module_find_by_name ...passed
00:06:08.533    Test: test_spdk_accel_module_register ...passed
00:06:08.533  
00:06:08.533  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.533                suites      2      2    n/a      0        0
00:06:08.533                 tests     26     26     26      0        0
00:06:08.533               asserts    831    831    831      0      n/a
00:06:08.533  
00:06:08.533  Elapsed time =    0.039 seconds
00:06:08.533  
00:06:08.533  real	0m0.079s
00:06:08.533  user	0m0.031s
00:06:08.533  sys	0m0.046s
00:06:08.533   23:39:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:08.533   23:39:39	-- common/autotest_common.sh@10 -- # set +x
00:06:08.533  ************************************
00:06:08.533  END TEST unittest_accel
00:06:08.533  ************************************
00:06:08.533   23:39:39	-- unit/unittest.sh@214 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut
00:06:08.533   23:39:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:08.533   23:39:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:08.533   23:39:39	-- common/autotest_common.sh@10 -- # set +x
00:06:08.533  ************************************
00:06:08.533  START TEST unittest_ioat
00:06:08.533  ************************************
00:06:08.533   23:39:39	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut
00:06:08.533  
00:06:08.533  
00:06:08.533       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.533       http://cunit.sourceforge.net/
00:06:08.533  
00:06:08.533  
00:06:08.533  Suite: ioat
00:06:08.533    Test: ioat_state_check ...passed
00:06:08.533  
00:06:08.533  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.533                suites      1      1    n/a      0        0
00:06:08.533                 tests      1      1      1      0        0
00:06:08.533               asserts     32     32     32      0      n/a
00:06:08.533  
00:06:08.533  Elapsed time =    0.000 seconds
00:06:08.533  
00:06:08.533  real	0m0.030s
00:06:08.533  user	0m0.022s
00:06:08.533  sys	0m0.009s
00:06:08.533   23:39:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:08.533   23:39:39	-- common/autotest_common.sh@10 -- # set +x
00:06:08.533  ************************************
00:06:08.533  END TEST unittest_ioat
00:06:08.533  ************************************
00:06:08.793   23:39:39	-- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:08.793   23:39:39	-- unit/unittest.sh@216 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut
00:06:08.793   23:39:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:08.793   23:39:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:08.793   23:39:39	-- common/autotest_common.sh@10 -- # set +x
00:06:08.793  ************************************
00:06:08.793  START TEST unittest_idxd_user
00:06:08.793  ************************************
00:06:08.793   23:39:39	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut
00:06:08.793  
00:06:08.793  
00:06:08.793       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.793       http://cunit.sourceforge.net/
00:06:08.793  
00:06:08.793  
00:06:08.793  Suite: idxd_user
00:06:08.793    Test: test_idxd_wait_cmd ...[2024-12-13 23:39:39.302548] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1
00:06:08.794  [2024-12-13 23:39:39.302971] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1
00:06:08.794  passed
00:06:08.794    Test: test_idxd_reset_dev ...[2024-12-13 23:39:39.303454] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1
00:06:08.794  [2024-12-13 23:39:39.303665] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274
00:06:08.794  passed
00:06:08.794    Test: test_idxd_group_config ...passed
00:06:08.794    Test: test_idxd_wq_config ...passed
00:06:08.794  
00:06:08.794  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.794                suites      1      1    n/a      0        0
00:06:08.794                 tests      4      4      4      0        0
00:06:08.794               asserts     20     20     20      0      n/a
00:06:08.794  
00:06:08.794  Elapsed time =    0.001 seconds
00:06:08.794  
00:06:08.794  real	0m0.026s
00:06:08.794  user	0m0.012s
00:06:08.794  sys	0m0.014s
00:06:08.794   23:39:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:08.794   23:39:39	-- common/autotest_common.sh@10 -- # set +x
00:06:08.794  ************************************
00:06:08.794  END TEST unittest_idxd_user
00:06:08.794  ************************************
00:06:08.794   23:39:39	-- unit/unittest.sh@218 -- # run_test unittest_iscsi unittest_iscsi
00:06:08.794   23:39:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:08.794   23:39:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:08.794   23:39:39	-- common/autotest_common.sh@10 -- # set +x
00:06:08.794  ************************************
00:06:08.794  START TEST unittest_iscsi
00:06:08.794  ************************************
00:06:08.794   23:39:39	-- common/autotest_common.sh@1114 -- # unittest_iscsi
00:06:08.794   23:39:39	-- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut
00:06:08.794  
00:06:08.794  
00:06:08.794       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.794       http://cunit.sourceforge.net/
00:06:08.794  
00:06:08.794  
00:06:08.794  Suite: conn_suite
00:06:08.794    Test: read_task_split_in_order_case ...passed
00:06:08.794    Test: read_task_split_reverse_order_case ...passed
00:06:08.794    Test: propagate_scsi_error_status_for_split_read_tasks ...passed
00:06:08.794    Test: process_non_read_task_completion_test ...passed
00:06:08.794    Test: free_tasks_on_connection ...passed
00:06:08.794    Test: free_tasks_with_queued_datain ...passed
00:06:08.794    Test: abort_queued_datain_task_test ...passed
00:06:08.794    Test: abort_queued_datain_tasks_test ...passed
00:06:08.794  
00:06:08.794  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.794                suites      1      1    n/a      0        0
00:06:08.794                 tests      8      8      8      0        0
00:06:08.794               asserts    230    230    230      0      n/a
00:06:08.794  
00:06:08.794  Elapsed time =    0.000 seconds
00:06:08.794   23:39:39	-- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut
00:06:08.794  
00:06:08.794  
00:06:08.794       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.794       http://cunit.sourceforge.net/
00:06:08.794  
00:06:08.794  
00:06:08.794  Suite: iscsi_suite
00:06:08.794    Test: param_negotiation_test ...passed
00:06:08.794    Test: list_negotiation_test ...passed
00:06:08.794    Test: parse_valid_test ...passed
00:06:08.794    Test: parse_invalid_test ...[2024-12-13 23:39:39.416986] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found
00:06:08.794  [2024-12-13 23:39:39.417331] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found
00:06:08.794  [2024-12-13 23:39:39.417400] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key
00:06:08.794  [2024-12-13 23:39:39.417492] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193
00:06:08.794  [2024-12-13 23:39:39.417695] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256
00:06:08.794  [2024-12-13 23:39:39.417792] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63
00:06:08.794  [2024-12-13 23:39:39.417947] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B
00:06:08.794  passed
00:06:08.794  
00:06:08.794  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.794                suites      1      1    n/a      0        0
00:06:08.794                 tests      4      4      4      0        0
00:06:08.794               asserts    161    161    161      0      n/a
00:06:08.794  
00:06:08.794  Elapsed time =    0.006 seconds
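
parse_invalid_test exercises the key=value rules enforced by iscsi_parse_param in lib/iscsi/param.c: an '=' must be present, the key must be non-empty and at most 63 characters, values have length limits, and keys must not repeat. An illustrative validator for the first three rules (not SPDK's parser):

    #include <stdio.h>
    #include <string.h>

    #define MAX_KEY_LEN 63  /* matches "Key name length is bigger than 63" */

    static int check_param(const char *text)
    {
        const char *eq = strchr(text, '=');

        if (eq == NULL) {
            fprintf(stderr, "'=' not found\n");
            return -1;
        }
        if (eq == text) {
            fprintf(stderr, "Empty key\n");
            return -1;
        }
        if ((size_t)(eq - text) > MAX_KEY_LEN) {
            fprintf(stderr, "Key name length is bigger than %d\n", MAX_KEY_LEN);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        int ok = check_param("MaxRecvDataSegmentLength=8192");  /* accepted */
        int bad = check_param("NoEqualsSign");                  /* rejected */
        return ok == 0 && bad != 0 ? 0 : 1;
    }
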
00:06:08.794   23:39:39	-- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut
00:06:08.794  
00:06:08.794  
00:06:08.794       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.794       http://cunit.sourceforge.net/
00:06:08.794  
00:06:08.794  
00:06:08.794  Suite: iscsi_target_node_suite
00:06:08.794    Test: add_lun_test_cases ...[2024-12-13 23:39:39.449025] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1)
00:06:08.794  [2024-12-13 23:39:39.449282] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative
00:06:08.794  [2024-12-13 23:39:39.449376] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found
00:06:08.794  [2024-12-13 23:39:39.449419] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found
00:06:08.794  [2024-12-13 23:39:39.449453] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed
00:06:08.794  passed
00:06:08.794    Test: allow_any_allowed ...passed
00:06:08.794    Test: allow_ipv6_allowed ...passed
00:06:08.794    Test: allow_ipv6_denied ...passed
00:06:08.794    Test: allow_ipv6_invalid ...passed
00:06:08.794    Test: allow_ipv4_allowed ...passed
00:06:08.794    Test: allow_ipv4_denied ...passed
00:06:08.794    Test: allow_ipv4_invalid ...passed
00:06:08.794    Test: node_access_allowed ...passed
00:06:08.794    Test: node_access_denied_by_empty_netmask ...passed
00:06:08.794    Test: node_access_multi_initiator_groups_cases ...passed
00:06:08.794    Test: allow_iscsi_name_multi_maps_case ...passed
00:06:08.794    Test: chap_param_test_cases ...[2024-12-13 23:39:39.449821] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0)
00:06:08.794  [2024-12-13 23:39:39.449871] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1)
00:06:08.794  [2024-12-13 23:39:39.449923] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1)
00:06:08.794  [2024-12-13 23:39:39.449957] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1)
00:06:08.794  [2024-12-13 23:39:39.449992] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1)
00:06:08.794  passed
00:06:08.794  
00:06:08.794  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.794                suites      1      1    n/a      0        0
00:06:08.794                 tests     13     13     13      0        0
00:06:08.794               asserts     50     50     50      0      n/a
00:06:08.794  
00:06:08.794  Elapsed time =    0.001 seconds
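
Reading off the chap_param_test_cases failures above, one rule reproduces every logged invalid combination (d=disable, r=require, m=mutual): disabling CHAP conflicts with requiring it or with mutual CHAP, and mutual CHAP needs require set. This predicate is inferred from the log, not taken from SPDK's iscsi_check_chap_params():

    #include <stdbool.h>
    #include <stdio.h>

    static bool chap_params_valid(bool disable, bool require, bool mutual)
    {
        if (disable && (require || mutual)) {
            return false;  /* covers (d=1,r=1,m=0), (d=1,r=0,m=1), (d=1,r=1,m=1) */
        }
        if (mutual && !require) {
            return false;  /* covers (d=0,r=0,m=1) */
        }
        return true;
    }

    int main(void)
    {
        printf("d=1,r=1,m=0 -> %s\n", chap_params_valid(true, true, false) ? "valid" : "invalid");
        printf("d=0,r=1,m=1 -> %s\n", chap_params_valid(false, true, true) ? "valid" : "invalid");
        return 0;
    }
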
00:06:08.794   23:39:39	-- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut
00:06:08.794  
00:06:08.794  
00:06:08.794       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.794       http://cunit.sourceforge.net/
00:06:08.794  
00:06:08.794  
00:06:08.794  Suite: iscsi_suite
00:06:08.794    Test: op_login_check_target_test ...[2024-12-13 23:39:39.478869] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied
00:06:08.794  passed
00:06:08.794    Test: op_login_session_normal_test ...[2024-12-13 23:39:39.479220] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:06:08.794  [2024-12-13 23:39:39.479299] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:06:08.794  [2024-12-13 23:39:39.479364] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:06:08.794  [2024-12-13 23:39:39.479432] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed
00:06:08.794  [2024-12-13 23:39:39.479561] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed
00:06:08.794  [2024-12-13 23:39:39.479682] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0
00:06:08.794  [2024-12-13 23:39:39.479747] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed
00:06:08.794  passed
00:06:08.794    Test: maxburstlength_test ...[2024-12-13 23:39:39.480007] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU
00:06:08.794  [2024-12-13 23:39:39.480099] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL)
00:06:08.794  passed
00:06:08.794    Test: underflow_for_read_transfer_test ...passed
00:06:08.794    Test: underflow_for_zero_read_transfer_test ...passed
00:06:08.794    Test: underflow_for_request_sense_test ...passed
00:06:08.794    Test: underflow_for_check_condition_test ...passed
00:06:08.794    Test: add_transfer_task_test ...passed
00:06:08.794    Test: get_transfer_task_test ...passed
00:06:08.794    Test: del_transfer_task_test ...passed
00:06:08.794    Test: clear_all_transfer_tasks_test ...passed
00:06:08.794    Test: build_iovs_test ...passed
00:06:08.794    Test: build_iovs_with_md_test ...passed
00:06:08.794    Test: pdu_hdr_op_login_test ...[2024-12-13 23:39:39.481657] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error
00:06:08.794  [2024-12-13 23:39:39.481798] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0
00:06:08.794  passed
00:06:08.795    Test: pdu_hdr_op_text_test ...[2024-12-13 23:39:39.481899] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2
00:06:08.795  [2024-12-13 23:39:39.482020] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68)
00:06:08.795  [2024-12-13 23:39:39.482109] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue
00:06:08.795  [2024-12-13 23:39:39.482158] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678...
00:06:08.795  passed
00:06:08.795    Test: pdu_hdr_op_logout_test ...[2024-12-13 23:39:39.482248] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason.
00:06:08.795  passed
00:06:08.795    Test: pdu_hdr_op_scsi_test ...[2024-12-13 23:39:39.482423] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session
00:06:08.795  [2024-12-13 23:39:39.482468] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session
00:06:08.795  [2024-12-13 23:39:39.482526] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported
00:06:08.795  [2024-12-13 23:39:39.482638] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68)
00:06:08.795  [2024-12-13 23:39:39.482767] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67)
00:06:08.795  [2024-12-13 23:39:39.482962] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0
00:06:08.795  passed
00:06:08.795    Test: pdu_hdr_op_task_mgmt_test ...[2024-12-13 23:39:39.483093] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session
00:06:08.795  [2024-12-13 23:39:39.483201] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0
00:06:08.795  passed
00:06:08.795    Test: pdu_hdr_op_nopout_test ...[2024-12-13 23:39:39.483418] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session
00:06:08.795  [2024-12-13 23:39:39.483535] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3
00:06:08.795  [2024-12-13 23:39:39.483586] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3
00:06:08.795  passed
00:06:08.795    Test: pdu_hdr_op_data_test ...[2024-12-13 23:39:39.483629] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0
00:06:08.795  [2024-12-13 23:39:39.483667] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session
00:06:08.795  [2024-12-13 23:39:39.483736] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0
00:06:08.795  [2024-12-13 23:39:39.483818] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU
00:06:08.795  [2024-12-13 23:39:39.483888] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1
00:06:08.795  [2024-12-13 23:39:39.483961] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error
00:06:08.795  passed
00:06:08.795    Test: empty_text_with_cbit_test ...[2024-12-13 23:39:39.484059] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error
00:06:08.795  [2024-12-13 23:39:39.484106] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535)
00:06:08.795  passed
00:06:08.795    Test: pdu_payload_read_test ...[2024-12-13 23:39:39.486376] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536)
00:06:08.795  passed
00:06:08.795    Test: data_out_pdu_sequence_test ...passed
00:06:08.795    Test: immediate_data_and_data_out_pdu_sequence_test ...passed
00:06:08.795  
00:06:08.795  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.795                suites      1      1    n/a      0        0
00:06:08.795                 tests     24     24     24      0        0
00:06:08.795               asserts 150253 150253 150253      0      n/a
00:06:08.795  
00:06:08.795  Elapsed time =    0.018 seconds
00:06:08.795   23:39:39	-- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut
00:06:08.795  
00:06:08.795  
00:06:08.795       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.795       http://cunit.sourceforge.net/
00:06:08.795  
00:06:08.795  
00:06:08.795  Suite: init_grp_suite
00:06:08.795    Test: create_initiator_group_success_case ...passed
00:06:08.795    Test: find_initiator_group_success_case ...passed
00:06:08.795    Test: register_initiator_group_twice_case ...passed
00:06:08.795    Test: add_initiator_name_success_case ...passed
00:06:08.795    Test: add_initiator_name_fail_case ...[2024-12-13 23:39:39.523513] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c:  54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed
00:06:08.795  passed
00:06:08.795    Test: delete_all_initiator_names_success_case ...passed
00:06:08.795    Test: add_netmask_success_case ...passed
00:06:08.795    Test: add_netmask_fail_case ...[2024-12-13 23:39:39.523953] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed
00:06:08.795  passed
00:06:08.795    Test: delete_all_netmasks_success_case ...passed
00:06:08.795    Test: initiator_name_overwrite_all_to_any_case ...passed
00:06:08.795    Test: netmask_overwrite_all_to_any_case ...passed
00:06:08.795    Test: add_delete_initiator_names_case ...passed
00:06:08.795    Test: add_duplicated_initiator_names_case ...passed
00:06:08.795    Test: delete_nonexisting_initiator_names_case ...passed
00:06:08.795    Test: add_delete_netmasks_case ...passed
00:06:08.795    Test: add_duplicated_netmasks_case ...passed
00:06:08.795    Test: delete_nonexisting_netmasks_case ...passed
00:06:08.795  
00:06:08.795  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.795                suites      1      1    n/a      0        0
00:06:08.795                 tests     17     17     17      0        0
00:06:08.795               asserts    108    108    108      0      n/a
00:06:08.795  
00:06:08.795  Elapsed time =    0.001 seconds
00:06:09.056   23:39:39	-- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut
00:06:09.056  
00:06:09.056  
00:06:09.056       CUnit - A unit testing framework for C - Version 2.1-3
00:06:09.056       http://cunit.sourceforge.net/
00:06:09.056  
00:06:09.056  
00:06:09.056  Suite: portal_grp_suite
00:06:09.056    Test: portal_create_ipv4_normal_case ...passed
00:06:09.056    Test: portal_create_ipv6_normal_case ...passed
00:06:09.056    Test: portal_create_ipv4_wildcard_case ...passed
00:06:09.056    Test: portal_create_ipv6_wildcard_case ...passed
00:06:09.056    Test: portal_create_twice_case ...[2024-12-13 23:39:39.553724] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists
00:06:09.056  passed
00:06:09.056    Test: portal_grp_register_unregister_case ...passed
00:06:09.057    Test: portal_grp_register_twice_case ...passed
00:06:09.057    Test: portal_grp_add_delete_case ...passed
00:06:09.057    Test: portal_grp_add_delete_twice_case ...passed
00:06:09.057  
00:06:09.057  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:09.057                suites      1      1    n/a      0        0
00:06:09.057                 tests      9      9      9      0        0
00:06:09.057               asserts     44     44     44      0      n/a
00:06:09.057  
00:06:09.057  Elapsed time =    0.003 seconds
00:06:09.057  
00:06:09.057  real	0m0.205s
00:06:09.057  user	0m0.121s
00:06:09.057  sys	0m0.087s
00:06:09.057   23:39:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:09.057   23:39:39	-- common/autotest_common.sh@10 -- # set +x
00:06:09.057  ************************************
00:06:09.057  END TEST unittest_iscsi
00:06:09.057  ************************************
00:06:09.057   23:39:39	-- unit/unittest.sh@219 -- # run_test unittest_json unittest_json
00:06:09.057   23:39:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:09.057   23:39:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:09.057   23:39:39	-- common/autotest_common.sh@10 -- # set +x
00:06:09.057  ************************************
00:06:09.057  START TEST unittest_json
00:06:09.057  ************************************
00:06:09.057   23:39:39	-- common/autotest_common.sh@1114 -- # unittest_json
00:06:09.057   23:39:39	-- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut
00:06:09.057  
00:06:09.057  
00:06:09.057       CUnit - A unit testing framework for C - Version 2.1-3
00:06:09.057       http://cunit.sourceforge.net/
00:06:09.057  
00:06:09.057  
00:06:09.057  Suite: json
00:06:09.057    Test: test_parse_literal ...passed
00:06:09.057    Test: test_parse_string_simple ...passed
00:06:09.057    Test: test_parse_string_control_chars ...passed
00:06:09.057    Test: test_parse_string_utf8 ...passed
00:06:09.057    Test: test_parse_string_escapes_twochar ...passed
00:06:09.057    Test: test_parse_string_escapes_unicode ...passed
00:06:09.057    Test: test_parse_number ...passed
00:06:09.057    Test: test_parse_array ...passed
00:06:09.057    Test: test_parse_object ...passed
00:06:09.057    Test: test_parse_nesting ...passed
00:06:09.057    Test: test_parse_comment ...passed
00:06:09.057  
00:06:09.057  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:09.057                suites      1      1    n/a      0        0
00:06:09.057                 tests     11     11     11      0        0
00:06:09.057               asserts   1516   1516   1516      0      n/a
00:06:09.057  
00:06:09.057  Elapsed time =    0.001 seconds
00:06:09.057   23:39:39	-- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut
00:06:09.057  
00:06:09.057  
00:06:09.057       CUnit - A unit testing framework for C - Version 2.1-3
00:06:09.057       http://cunit.sourceforge.net/
00:06:09.057  
00:06:09.057  
00:06:09.057  Suite: json
00:06:09.057    Test: test_strequal ...passed
00:06:09.057    Test: test_num_to_uint16 ...passed
00:06:09.057    Test: test_num_to_int32 ...passed
00:06:09.057    Test: test_num_to_uint64 ...passed
00:06:09.057    Test: test_decode_object ...passed
00:06:09.057    Test: test_decode_array ...passed
00:06:09.057    Test: test_decode_bool ...passed
00:06:09.057    Test: test_decode_uint16 ...passed
00:06:09.057    Test: test_decode_int32 ...passed
00:06:09.057    Test: test_decode_uint32 ...passed
00:06:09.057    Test: test_decode_uint64 ...passed
00:06:09.057    Test: test_decode_string ...passed
00:06:09.057    Test: test_decode_uuid ...passed
00:06:09.057    Test: test_find ...passed
00:06:09.057    Test: test_find_array ...passed
00:06:09.057    Test: test_iterating ...passed
00:06:09.057    Test: test_free_object ...passed
00:06:09.057  
00:06:09.057  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:09.057                suites      1      1    n/a      0        0
00:06:09.057                 tests     17     17     17      0        0
00:06:09.057               asserts    236    236    236      0      n/a
00:06:09.057  
00:06:09.057  Elapsed time =    0.001 seconds
00:06:09.057   23:39:39	-- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut
00:06:09.057  
00:06:09.057  
00:06:09.057       CUnit - A unit testing framework for C - Version 2.1-3
00:06:09.057       http://cunit.sourceforge.net/
00:06:09.057  
00:06:09.057  
00:06:09.057  Suite: json
00:06:09.057    Test: test_write_literal ...passed
00:06:09.057    Test: test_write_string_simple ...passed
00:06:09.057    Test: test_write_string_escapes ...passed
00:06:09.057    Test: test_write_string_utf16le ...passed
00:06:09.057    Test: test_write_number_int32 ...passed
00:06:09.057    Test: test_write_number_uint32 ...passed
00:06:09.057    Test: test_write_number_uint128 ...passed
00:06:09.057    Test: test_write_string_number_uint128 ...passed
00:06:09.057    Test: test_write_number_int64 ...passed
00:06:09.057    Test: test_write_number_uint64 ...passed
00:06:09.057    Test: test_write_number_double ...passed
00:06:09.057    Test: test_write_uuid ...passed
00:06:09.057    Test: test_write_array ...passed
00:06:09.057    Test: test_write_object ...passed
00:06:09.057    Test: test_write_nesting ...passed
00:06:09.057    Test: test_write_val ...passed
00:06:09.057  
00:06:09.057  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:09.057                suites      1      1    n/a      0        0
00:06:09.057                 tests     16     16     16      0        0
00:06:09.057               asserts    918    918    918      0      n/a
00:06:09.057  
00:06:09.057  Elapsed time =    0.004 seconds
00:06:09.057   23:39:39	-- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut
00:06:09.057  
00:06:09.057  
00:06:09.057       CUnit - A unit testing framework for C - Version 2.1-3
00:06:09.057       http://cunit.sourceforge.net/
00:06:09.057  
00:06:09.057  
00:06:09.057  Suite: jsonrpc
00:06:09.057    Test: test_parse_request ...passed
00:06:09.057    Test: test_parse_request_streaming ...passed
00:06:09.057  
00:06:09.057  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:09.057                suites      1      1    n/a      0        0
00:06:09.057                 tests      2      2      2      0        0
00:06:09.057               asserts    289    289    289      0      n/a
00:06:09.057  
00:06:09.057  Elapsed time =    0.004 seconds
00:06:09.057  
00:06:09.057  real	0m0.122s
00:06:09.057  user	0m0.074s
00:06:09.057  sys	0m0.047s
00:06:09.057   23:39:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:09.057   23:39:39	-- common/autotest_common.sh@10 -- # set +x
00:06:09.057  ************************************
00:06:09.057  END TEST unittest_json
00:06:09.057  ************************************
00:06:09.057   23:39:39	-- unit/unittest.sh@220 -- # run_test unittest_rpc unittest_rpc
00:06:09.057   23:39:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:09.057   23:39:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:09.057   23:39:39	-- common/autotest_common.sh@10 -- # set +x
00:06:09.316  ************************************
00:06:09.316  START TEST unittest_rpc
00:06:09.316  ************************************
00:06:09.316   23:39:39	-- common/autotest_common.sh@1114 -- # unittest_rpc
00:06:09.316   23:39:39	-- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut
00:06:09.316  
00:06:09.316  
00:06:09.316       CUnit - A unit testing framework for C - Version 2.1-3
00:06:09.316       http://cunit.sourceforge.net/
00:06:09.316  
00:06:09.316  
00:06:09.316  Suite: rpc
00:06:09.316    Test: test_jsonrpc_handler ...passed
00:06:09.316    Test: test_spdk_rpc_is_method_allowed ...passed
00:06:09.316    Test: test_rpc_get_methods ...[2024-12-13 23:39:39.803572] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed
00:06:09.316  passed
00:06:09.316    Test: test_rpc_spdk_get_version ...passed
00:06:09.316    Test: test_spdk_rpc_listen_close ...passed
00:06:09.316  
00:06:09.316  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:09.316                suites      1      1    n/a      0        0
00:06:09.316                 tests      5      5      5      0        0
00:06:09.316               asserts     20     20     20      0      n/a
00:06:09.316  
00:06:09.316  Elapsed time =    0.000 seconds
00:06:09.316  
00:06:09.316  real	0m0.027s
00:06:09.316  user	0m0.019s
00:06:09.316  sys	0m0.008s
00:06:09.316   23:39:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:09.316   23:39:39	-- common/autotest_common.sh@10 -- # set +x
00:06:09.316  ************************************
00:06:09.316  END TEST unittest_rpc
00:06:09.316  ************************************
00:06:09.316   23:39:39	-- unit/unittest.sh@221 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut
00:06:09.316   23:39:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:09.316   23:39:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:09.316   23:39:39	-- common/autotest_common.sh@10 -- # set +x
00:06:09.316  ************************************
00:06:09.316  START TEST unittest_notify
00:06:09.316  ************************************
00:06:09.316   23:39:39	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut
00:06:09.316  
00:06:09.316  
00:06:09.316       CUnit - A unit testing framework for C - Version 2.1-3
00:06:09.316       http://cunit.sourceforge.net/
00:06:09.316  
00:06:09.316  
00:06:09.316  Suite: app_suite
00:06:09.316    Test: notify ...passed
00:06:09.316  
00:06:09.316  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:09.316                suites      1      1    n/a      0        0
00:06:09.316                 tests      1      1      1      0        0
00:06:09.316               asserts     13     13     13      0      n/a
00:06:09.316  
00:06:09.316  Elapsed time =    0.000 seconds
00:06:09.316  
00:06:09.316  real	0m0.026s
00:06:09.316  user	0m0.012s
00:06:09.316  sys	0m0.014s
00:06:09.316   23:39:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:09.316   23:39:39	-- common/autotest_common.sh@10 -- # set +x
00:06:09.316  ************************************
00:06:09.316  END TEST unittest_notify
00:06:09.316  ************************************
00:06:09.316   23:39:39	-- unit/unittest.sh@222 -- # run_test unittest_nvme unittest_nvme
00:06:09.316   23:39:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:09.316   23:39:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:09.316   23:39:39	-- common/autotest_common.sh@10 -- # set +x
00:06:09.316  ************************************
00:06:09.316  START TEST unittest_nvme
00:06:09.316  ************************************
00:06:09.317   23:39:39	-- common/autotest_common.sh@1114 -- # unittest_nvme
00:06:09.317   23:39:39	-- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut
00:06:09.317  
00:06:09.317  
00:06:09.317       CUnit - A unit testing framework for C - Version 2.1-3
00:06:09.317       http://cunit.sourceforge.net/
00:06:09.317  
00:06:09.317  
00:06:09.317  Suite: nvme
00:06:09.317    Test: test_opc_data_transfer ...passed
00:06:09.317    Test: test_spdk_nvme_transport_id_parse_trtype ...passed
00:06:09.317    Test: test_spdk_nvme_transport_id_parse_adrfam ...passed
00:06:09.317    Test: test_trid_parse_and_compare ...[2024-12-13 23:39:39.940742] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator
00:06:09.317  passed
00:06:09.317    Test: test_trid_trtype_str ...passed
00:06:09.317    Test: test_trid_adrfam_str ...passed
00:06:09.317    Test: test_nvme_ctrlr_probe ...passed
00:06:09.317    Test: test_spdk_nvme_probe ...[2024-12-13 23:39:39.941055] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:06:09.317  [2024-12-13 23:39:39.941164] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31
00:06:09.317  [2024-12-13 23:39:39.941211] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:06:09.317  [2024-12-13 23:39:39.941249] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value
00:06:09.317  [2024-12-13 23:39:39.941340] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:06:09.317  [2024-12-13 23:39:39.941572] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 
00:06:09.317  [2024-12-13 23:39:39.941726] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet
00:06:09.317  [2024-12-13 23:39:39.941770] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed
00:06:09.317  [2024-12-13 23:39:39.941880] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available
00:06:09.317  [2024-12-13 23:39:39.941938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed
00:06:09.317  passed
00:06:09.317    Test: test_spdk_nvme_connect ...[2024-12-13 23:39:39.942051] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified
00:06:09.317  [2024-12-13 23:39:39.942467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet
00:06:09.317  passed
00:06:09.317    Test: test_nvme_ctrlr_probe_internal ...[2024-12-13 23:39:39.942550] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed
00:06:09.317  [2024-12-13 23:39:39.942700] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 
00:06:09.317  passed
00:06:09.317    Test: test_nvme_init_controllers ...[2024-12-13 23:39:39.942765] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:06:09.317  [2024-12-13 23:39:39.942857] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 
00:06:09.317  passed
00:06:09.317    Test: test_nvme_driver_init ...[2024-12-13 23:39:39.942974] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory
00:06:09.317  [2024-12-13 23:39:39.943022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet
00:06:09.576  [2024-12-13 23:39:40.057108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init
00:06:09.576  [2024-12-13 23:39:40.057268] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex
00:06:09.576  passed
00:06:09.576    Test: test_spdk_nvme_detach ...passed
00:06:09.576    Test: test_nvme_completion_poll_cb ...passed
00:06:09.576    Test: test_nvme_user_copy_cmd_complete ...passed
00:06:09.576    Test: test_nvme_allocate_request_null ...passed
00:06:09.576    Test: test_nvme_allocate_request ...passed
00:06:09.576    Test: test_nvme_free_request ...passed
00:06:09.576    Test: test_nvme_allocate_request_user_copy ...passed
00:06:09.576    Test: test_nvme_robust_mutex_init_shared ...passed
00:06:09.576    Test: test_nvme_request_check_timeout ...passed
00:06:09.576    Test: test_nvme_wait_for_completion ...passed
00:06:09.576    Test: test_spdk_nvme_parse_func ...passed
00:06:09.576    Test: test_spdk_nvme_detach_async ...passed
00:06:09.576    Test: test_nvme_parse_addr ...[2024-12-13 23:39:40.058130] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL
00:06:09.576  passed
00:06:09.576  
00:06:09.576  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:09.576                suites      1      1    n/a      0        0
00:06:09.576                 tests     25     25     25      0        0
00:06:09.576               asserts    326    326    326      0      n/a
00:06:09.576  
00:06:09.576  Elapsed time =    0.007 seconds
00:06:09.576   23:39:40	-- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut
00:06:09.576  
00:06:09.576  
00:06:09.576       CUnit - A unit testing framework for C - Version 2.1-3
00:06:09.576       http://cunit.sourceforge.net/
00:06:09.576  
00:06:09.576  
00:06:09.576  Suite: nvme_ctrlr
00:06:09.576    Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-12-13 23:39:40.085881] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.576  passed
00:06:09.576    Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-12-13 23:39:40.087737] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.576  passed
00:06:09.576    Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-12-13 23:39:40.089076] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.576  passed
00:06:09.576    Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-12-13 23:39:40.090341] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.576  passed
00:06:09.576    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-12-13 23:39:40.091677] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.576  [2024-12-13 23:39:40.092915] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
00:06:09.576  [2024-12-13 23:39:40.094184] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
00:06:09.576  [2024-12-13 23:39:40.095413] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
00:06:09.576  passed
00:06:09.576    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-12-13 23:39:40.097995] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.576  [2024-12-13 23:39:40.100453] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
00:06:09.576  [2024-12-13 23:39:40.101720] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
00:06:09.576  passed
00:06:09.576    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-12-13 23:39:40.104310] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.576  [2024-12-13 23:39:40.105635] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
00:06:09.576  [2024-12-13 23:39:40.108160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
00:06:09.576  passed
00:06:09.576    Test: test_nvme_ctrlr_init_delay ...[2024-12-13 23:39:40.110892] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.576  passed
00:06:09.576    Test: test_alloc_io_qpair_rr_1 ...[2024-12-13 23:39:40.112370] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.576  [2024-12-13 23:39:40.112607] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs
00:06:09.576  [2024-12-13 23:39:40.112890] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method
00:06:09.576  [2024-12-13 23:39:40.113007] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method
00:06:09.576  [2024-12-13 23:39:40.113095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method
00:06:09.576  passed
00:06:09.576    Test: test_ctrlr_get_default_ctrlr_opts ...passed
00:06:09.576    Test: test_ctrlr_get_default_io_qpair_opts ...passed
00:06:09.576    Test: test_alloc_io_qpair_wrr_1 ...[2024-12-13 23:39:40.113248] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.576  passed
00:06:09.576    Test: test_alloc_io_qpair_wrr_2 ...[2024-12-13 23:39:40.113494] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.576  [2024-12-13 23:39:40.113689] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs
00:06:09.576  passed
00:06:09.576    Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-12-13 23:39:40.114079] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size!
00:06:09.576  [2024-12-13 23:39:40.114318] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed!
00:06:09.576  passed
00:06:09.576    Test: test_nvme_ctrlr_fail ...passed
00:06:09.576    Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed
00:06:09.576    Test: test_nvme_ctrlr_set_supported_features ...passed
00:06:09.576    Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed
00:06:09.576    Test: test_nvme_ctrlr_test_active_ns ...[2024-12-13 23:39:40.114450] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed!
00:06:09.576  [2024-12-13 23:39:40.114564] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed!
00:06:09.576  [2024-12-13 23:39:40.114664] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state.
00:06:09.576  [2024-12-13 23:39:40.115115] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.835  passed
00:06:09.835    Test: test_nvme_ctrlr_test_active_ns_error_case ...passed
00:06:09.835    Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed
00:06:09.835    Test: test_spdk_nvme_ctrlr_set_trid ...passed
00:06:09.835    Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-12-13 23:39:40.446115] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.835  passed
00:06:09.835    Test: test_nvme_ctrlr_init_set_num_queues ...[2024-12-13 23:39:40.453689] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.835  passed
00:06:09.836    Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-12-13 23:39:40.455096] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.836  [2024-12-13 23:39:40.455407] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0
00:06:09.836  passed
00:06:09.836    Test: test_alloc_io_qpair_fail ...[2024-12-13 23:39:40.456793] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.836  [2024-12-13 23:39:40.457071] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed
00:06:09.836  passed
00:06:09.836    Test: test_nvme_ctrlr_add_remove_process ...passed
00:06:09.836    Test: test_nvme_ctrlr_set_arbitration_feature ...passed
00:06:09.836    Test: test_nvme_ctrlr_set_state ...[2024-12-13 23:39:40.457391] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout.
00:06:09.836  passed
00:06:09.836    Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-12-13 23:39:40.457613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.836  passed
00:06:09.836    Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-12-13 23:39:40.480937] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.836  passed
00:06:09.836    Test: test_nvme_ctrlr_ns_mgmt ...[2024-12-13 23:39:40.526647] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.836  passed
00:06:09.836    Test: test_nvme_ctrlr_reset ...[2024-12-13 23:39:40.528446] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.836  passed
00:06:09.836    Test: test_nvme_ctrlr_aer_callback ...[2024-12-13 23:39:40.529032] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.836  passed
00:06:09.836    Test: test_nvme_ctrlr_ns_attr_changed ...[2024-12-13 23:39:40.530702] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.836  passed
00:06:09.836    Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed
00:06:09.836    Test: test_nvme_ctrlr_set_supported_log_pages ...passed
00:06:09.836    Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-12-13 23:39:40.532676] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.836  passed
00:06:09.836    Test: test_nvme_ctrlr_parse_ana_log_page ...passed
00:06:09.836    Test: test_nvme_ctrlr_ana_resize ...[2024-12-13 23:39:40.534222] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.836  passed
00:06:09.836    Test: test_nvme_ctrlr_get_memory_domains ...passed
00:06:09.836    Test: test_nvme_transport_ctrlr_ready ...[2024-12-13 23:39:40.536149] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1
00:06:09.836  [2024-12-13 23:39:40.536426] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error)
00:06:09.836  passed
00:06:09.836    Test: test_nvme_ctrlr_disable ...[2024-12-13 23:39:40.536624] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:09.836  passed
00:06:09.836  
00:06:09.836  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:09.836                suites      1      1    n/a      0        0
00:06:09.836                 tests     43     43     43      0        0
00:06:09.836               asserts  10418  10418  10418      0      n/a
00:06:09.836  
00:06:09.836  Elapsed time =    0.407 seconds
00:06:09.836   23:39:40	-- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut
00:06:10.096  
00:06:10.096  
00:06:10.096       CUnit - A unit testing framework for C - Version 2.1-3
00:06:10.096       http://cunit.sourceforge.net/
00:06:10.096  
00:06:10.096  
00:06:10.096  Suite: nvme_ctrlr_cmd
00:06:10.096    Test: test_get_log_pages ...passed
00:06:10.096    Test: test_set_feature_cmd ...passed
00:06:10.096    Test: test_set_feature_ns_cmd ...passed
00:06:10.096    Test: test_get_feature_cmd ...passed
00:06:10.096    Test: test_get_feature_ns_cmd ...passed
00:06:10.096    Test: test_abort_cmd ...passed
00:06:10.096    Test: test_set_host_id_cmds ...[2024-12-13 23:39:40.578770] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024
00:06:10.096  passed
00:06:10.096    Test: test_io_cmd_raw_no_payload_build ...passed
00:06:10.096    Test: test_io_raw_cmd ...passed
00:06:10.096    Test: test_io_raw_cmd_with_md ...passed
00:06:10.096    Test: test_namespace_attach ...passed
00:06:10.096    Test: test_namespace_detach ...passed
00:06:10.096    Test: test_namespace_create ...passed
00:06:10.096    Test: test_namespace_delete ...passed
00:06:10.096    Test: test_doorbell_buffer_config ...passed
00:06:10.096    Test: test_format_nvme ...passed
00:06:10.096    Test: test_fw_commit ...passed
00:06:10.096    Test: test_fw_image_download ...passed
00:06:10.096    Test: test_sanitize ...passed
00:06:10.096    Test: test_directive ...passed
00:06:10.096    Test: test_nvme_request_add_abort ...passed
00:06:10.096    Test: test_spdk_nvme_ctrlr_cmd_abort ...passed
00:06:10.096    Test: test_nvme_ctrlr_cmd_identify ...passed
00:06:10.096    Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed
00:06:10.096  
00:06:10.096  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:10.096                suites      1      1    n/a      0        0
00:06:10.096                 tests     24     24     24      0        0
00:06:10.096               asserts    198    198    198      0      n/a
00:06:10.096  
00:06:10.096  Elapsed time =    0.001 seconds
00:06:10.096   23:39:40	-- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut
00:06:10.096  
00:06:10.096  
00:06:10.096       CUnit - A unit testing framework for C - Version 2.1-3
00:06:10.096       http://cunit.sourceforge.net/
00:06:10.096  
00:06:10.096  
00:06:10.096  Suite: nvme_ctrlr_cmd
00:06:10.096    Test: test_geometry_cmd ...passed
00:06:10.096    Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed
00:06:10.096  
00:06:10.096  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:10.096                suites      1      1    n/a      0        0
00:06:10.096                 tests      2      2      2      0        0
00:06:10.096               asserts      7      7      7      0      n/a
00:06:10.096  
00:06:10.096  Elapsed time =    0.000 seconds
00:06:10.096   23:39:40	-- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut
00:06:10.096  
00:06:10.096  
00:06:10.096       CUnit - A unit testing framework for C - Version 2.1-3
00:06:10.096       http://cunit.sourceforge.net/
00:06:10.096  
00:06:10.096  
00:06:10.096  Suite: nvme
00:06:10.096    Test: test_nvme_ns_construct ...passed
00:06:10.096    Test: test_nvme_ns_uuid ...passed
00:06:10.096    Test: test_nvme_ns_csi ...passed
00:06:10.096    Test: test_nvme_ns_data ...passed
00:06:10.096    Test: test_nvme_ns_set_identify_data ...passed
00:06:10.096    Test: test_spdk_nvme_ns_get_values ...passed
00:06:10.096    Test: test_spdk_nvme_ns_is_active ...passed
00:06:10.096    Test: spdk_nvme_ns_supports ...passed
00:06:10.096    Test: test_nvme_ns_has_supported_iocs_specific_data ...passed
00:06:10.096    Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed
00:06:10.096    Test: test_nvme_ctrlr_identify_id_desc ...passed
00:06:10.096    Test: test_nvme_ns_find_id_desc ...passed
00:06:10.096  
00:06:10.096  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:10.096                suites      1      1    n/a      0        0
00:06:10.096                 tests     12     12     12      0        0
00:06:10.096               asserts     83     83     83      0      n/a
00:06:10.096  
00:06:10.096  Elapsed time =    0.001 seconds
00:06:10.096   23:39:40	-- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut
00:06:10.096  
00:06:10.096  
00:06:10.096       CUnit - A unit testing framework for C - Version 2.1-3
00:06:10.096       http://cunit.sourceforge.net/
00:06:10.096  
00:06:10.096  
00:06:10.096  Suite: nvme_ns_cmd
00:06:10.096    Test: split_test ...passed
00:06:10.096    Test: split_test2 ...passed
00:06:10.096    Test: split_test3 ...passed
00:06:10.096    Test: split_test4 ...passed
00:06:10.096    Test: test_nvme_ns_cmd_flush ...passed
00:06:10.096    Test: test_nvme_ns_cmd_dataset_management ...passed
00:06:10.096    Test: test_nvme_ns_cmd_copy ...passed
00:06:10.096    Test: test_io_flags ...[2024-12-13 23:39:40.666646] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc
00:06:10.096  passed
00:06:10.096    Test: test_nvme_ns_cmd_write_zeroes ...passed
00:06:10.096    Test: test_nvme_ns_cmd_write_uncorrectable ...passed
00:06:10.096    Test: test_nvme_ns_cmd_reservation_register ...passed
00:06:10.096    Test: test_nvme_ns_cmd_reservation_release ...passed
00:06:10.096    Test: test_nvme_ns_cmd_reservation_acquire ...passed
00:06:10.096    Test: test_nvme_ns_cmd_reservation_report ...passed
00:06:10.096    Test: test_cmd_child_request ...passed
00:06:10.096    Test: test_nvme_ns_cmd_readv ...passed
00:06:10.096    Test: test_nvme_ns_cmd_read_with_md ...passed
00:06:10.096    Test: test_nvme_ns_cmd_writev ...[2024-12-13 23:39:40.668050] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512
00:06:10.096  passed
00:06:10.096    Test: test_nvme_ns_cmd_write_with_md ...passed
00:06:10.096    Test: test_nvme_ns_cmd_zone_append_with_md ...passed
00:06:10.096    Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed
00:06:10.096    Test: test_nvme_ns_cmd_comparev ...passed
00:06:10.096    Test: test_nvme_ns_cmd_compare_and_write ...passed
00:06:10.096    Test: test_nvme_ns_cmd_compare_with_md ...passed
00:06:10.096    Test: test_nvme_ns_cmd_comparev_with_md ...passed
00:06:10.096    Test: test_nvme_ns_cmd_setup_request ...passed
00:06:10.096    Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed
00:06:10.096    Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-12-13 23:39:40.669903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f
00:06:10.096  passed
00:06:10.096    Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-12-13 23:39:40.670009] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f
00:06:10.096  passed
00:06:10.096    Test: test_nvme_ns_cmd_verify ...passed
00:06:10.096    Test: test_nvme_ns_cmd_io_mgmt_send ...passed
00:06:10.096    Test: test_nvme_ns_cmd_io_mgmt_recv ...passed
00:06:10.096  
00:06:10.096  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:10.096                suites      1      1    n/a      0        0
00:06:10.096                 tests     32     32     32      0        0
00:06:10.096               asserts    550    550    550      0      n/a
00:06:10.096  
00:06:10.096  Elapsed time =    0.005 seconds
00:06:10.096   23:39:40	-- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut
00:06:10.096  
00:06:10.096  
00:06:10.096       CUnit - A unit testing framework for C - Version 2.1-3
00:06:10.096       http://cunit.sourceforge.net/
00:06:10.096  
00:06:10.096  
00:06:10.096  Suite: nvme_ns_cmd
00:06:10.096    Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed
00:06:10.096    Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed
00:06:10.096    Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed
00:06:10.096    Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed
00:06:10.096    Test: test_nvme_ocssd_ns_cmd_vector_read ...passed
00:06:10.096    Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed
00:06:10.096    Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed
00:06:10.096    Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed
00:06:10.096    Test: test_nvme_ocssd_ns_cmd_vector_write ...passed
00:06:10.096    Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed
00:06:10.097    Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed
00:06:10.097    Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed
00:06:10.097  
00:06:10.097  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:10.097                suites      1      1    n/a      0        0
00:06:10.097                 tests     12     12     12      0        0
00:06:10.097               asserts    123    123    123      0      n/a
00:06:10.097  
00:06:10.097  Elapsed time =    0.001 seconds
00:06:10.097   23:39:40	-- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut
00:06:10.097  
00:06:10.097  
00:06:10.097       CUnit - A unit testing framework for C - Version 2.1-3
00:06:10.097       http://cunit.sourceforge.net/
00:06:10.097  
00:06:10.097  
00:06:10.097  Suite: nvme_qpair
00:06:10.097    Test: test3 ...passed
00:06:10.097    Test: test_ctrlr_failed ...passed
00:06:10.097    Test: struct_packing ...passed
00:06:10.097    Test: test_nvme_qpair_process_completions ...[2024-12-13 23:39:40.718794] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:06:10.097  [2024-12-13 23:39:40.719155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:06:10.097  [2024-12-13 23:39:40.719239] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:06:10.097  [2024-12-13 23:39:40.719358] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:06:10.097  passed
00:06:10.097    Test: test_nvme_completion_is_retry ...passed
00:06:10.097    Test: test_get_status_string ...passed
00:06:10.097    Test: test_nvme_qpair_add_cmd_error_injection ...passed
00:06:10.097    Test: test_nvme_qpair_submit_request ...passed
00:06:10.097    Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed
00:06:10.097    Test: test_nvme_qpair_manual_complete_request ...passed
00:06:10.097    Test: test_nvme_qpair_init_deinit ...passed
00:06:10.097    Test: test_nvme_get_sgl_print_info ...[2024-12-13 23:39:40.719837] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:06:10.097  passed
00:06:10.097  
00:06:10.097  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:10.097                suites      1      1    n/a      0        0
00:06:10.097                 tests     12     12     12      0        0
00:06:10.097               asserts    154    154    154      0      n/a
00:06:10.097  
00:06:10.097  Elapsed time =    0.001 seconds
00:06:10.097   23:39:40	-- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut
00:06:10.097  
00:06:10.097  
00:06:10.097       CUnit - A unit testing framework for C - Version 2.1-3
00:06:10.097       http://cunit.sourceforge.net/
00:06:10.097  
00:06:10.097  
00:06:10.097  Suite: nvme_pcie
00:06:10.097    Test: test_prp_list_append ...[2024-12-13 23:39:40.747111] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned
00:06:10.097  [2024-12-13 23:39:40.747585] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800)
00:06:10.097  [2024-12-13 23:39:40.747662] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed
00:06:10.097  [2024-12-13 23:39:40.747992] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries
00:06:10.097  passed
00:06:10.097    Test: test_nvme_pcie_hotplug_monitor ...[2024-12-13 23:39:40.748145] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries
00:06:10.097  passed
00:06:10.097    Test: test_shadow_doorbell_update ...passed
00:06:10.097    Test: test_build_contig_hw_sgl_request ...passed
00:06:10.097    Test: test_nvme_pcie_qpair_build_metadata ...passed
00:06:10.097    Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed
00:06:10.097    Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed
00:06:10.097    Test: test_nvme_pcie_qpair_build_contig_request ...[2024-12-13 23:39:40.748530] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned
00:06:10.097  passed
00:06:10.097    Test: test_nvme_pcie_ctrlr_regs_get_set ...passed
00:06:10.097    Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed
00:06:10.097    Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-12-13 23:39:40.748706] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues.
00:06:10.097  passed
00:06:10.097    Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed
00:06:10.097    Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-12-13 23:39:40.748856] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value
00:06:10.097  [2024-12-13 23:39:40.748949] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled
00:06:10.097  passed
00:06:10.097    Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-12-13 23:39:40.749039] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller
00:06:10.097  passed
00:06:10.097  
00:06:10.097  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:10.097                suites      1      1    n/a      0        0
00:06:10.097                 tests     14     14     14      0        0
00:06:10.097               asserts    235    235    235      0      n/a
00:06:10.097  
00:06:10.097  Elapsed time =    0.002 seconds
00:06:10.097   23:39:40	-- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut
00:06:10.097  
00:06:10.097  
00:06:10.097       CUnit - A unit testing framework for C - Version 2.1-3
00:06:10.097       http://cunit.sourceforge.net/
00:06:10.097  
00:06:10.097  
00:06:10.097  Suite: nvme_ns_cmd
00:06:10.097    Test: nvme_poll_group_create_test ...passed
00:06:10.097    Test: nvme_poll_group_add_remove_test ...passed
00:06:10.097    Test: nvme_poll_group_process_completions ...passed
00:06:10.097    Test: nvme_poll_group_destroy_test ...passed
00:06:10.097    Test: nvme_poll_group_get_free_stats ...passed
00:06:10.097  
00:06:10.097  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:10.097                suites      1      1    n/a      0        0
00:06:10.097                 tests      5      5      5      0        0
00:06:10.097               asserts     75     75     75      0      n/a
00:06:10.097  
00:06:10.097  Elapsed time =    0.000 seconds
00:06:10.097   23:39:40	-- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut
00:06:10.097  
00:06:10.097  
00:06:10.097       CUnit - A unit testing framework for C - Version 2.1-3
00:06:10.097       http://cunit.sourceforge.net/
00:06:10.097  
00:06:10.097  
00:06:10.097  Suite: nvme_quirks
00:06:10.097    Test: test_nvme_quirks_striping ...passed
00:06:10.097  
00:06:10.097  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:10.097                suites      1      1    n/a      0        0
00:06:10.097                 tests      1      1      1      0        0
00:06:10.097               asserts      5      5      5      0      n/a
00:06:10.097  
00:06:10.097  Elapsed time =    0.000 seconds
00:06:10.097   23:39:40	-- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut
00:06:10.357  
00:06:10.357  
00:06:10.357       CUnit - A unit testing framework for C - Version 2.1-3
00:06:10.357       http://cunit.sourceforge.net/
00:06:10.357  
00:06:10.357  
00:06:10.357  Suite: nvme_tcp
00:06:10.357    Test: test_nvme_tcp_pdu_set_data_buf ...passed
00:06:10.357    Test: test_nvme_tcp_build_iovs ...passed
00:06:10.357    Test: test_nvme_tcp_build_sgl_request ...[2024-12-13 23:39:40.835050] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffd7508d0c0, and the iovcnt=16, remaining_size=28672
00:06:10.357  passed
00:06:10.357    Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed
00:06:10.357    Test: test_nvme_tcp_build_iovs_with_md ...passed
00:06:10.357    Test: test_nvme_tcp_req_complete_safe ...passed
00:06:10.357    Test: test_nvme_tcp_req_get ...passed
00:06:10.357    Test: test_nvme_tcp_req_init ...passed
00:06:10.357    Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed
00:06:10.357    Test: test_nvme_tcp_qpair_write_pdu ...passed
00:06:10.357    Test: test_nvme_tcp_qpair_set_recv_state ...[2024-12-13 23:39:40.835875] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7508ede0 is same with the state(6) to be set
00:06:10.357  passed
00:06:10.357    Test: test_nvme_tcp_alloc_reqs ...passed
00:06:10.357    Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-12-13 23:39:40.836334] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7508df70 is same with the state(5) to be set
00:06:10.357  passed
00:06:10.357    Test: test_nvme_tcp_pdu_ch_handle ...[2024-12-13 23:39:40.836475] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffd7508eaa0
00:06:10.357  [2024-12-13 23:39:40.836565] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0
00:06:10.357  [2024-12-13 23:39:40.836722] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7508e430 is same with the state(5) to be set
00:06:10.357  [2024-12-13 23:39:40.836841] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated
00:06:10.357  [2024-12-13 23:39:40.837000] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7508e430 is same with the state(5) to be set
00:06:10.357  [2024-12-13 23:39:40.837091] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:06:10.357  [2024-12-13 23:39:40.837162] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7508e430 is same with the state(5) to be set
00:06:10.357  [2024-12-13 23:39:40.837248] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7508e430 is same with the state(5) to be set
00:06:10.357  [2024-12-13 23:39:40.837345] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7508e430 is same with the state(5) to be set
00:06:10.357  [2024-12-13 23:39:40.837456] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7508e430 is same with the state(5) to be set
00:06:10.357  [2024-12-13 23:39:40.837533] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7508e430 is same with the state(5) to be set
00:06:10.357  passed
00:06:10.357    Test: test_nvme_tcp_qpair_connect_sock ...[2024-12-13 23:39:40.837649] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7508e430 is same with the state(5) to be set
00:06:10.357  [2024-12-13 23:39:40.837938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3
00:06:10.357  [2024-12-13 23:39:40.838047] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed
00:06:10.357  [2024-12-13 23:39:40.838398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed
00:06:10.357  passed
00:06:10.357    Test: test_nvme_tcp_qpair_icreq_send ...passed
00:06:10.357    Test: test_nvme_tcp_c2h_payload_handle ...[2024-12-13 23:39:40.838641] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd7508e5e0): PDU Sequence Error
00:06:10.357  passed
00:06:10.357    Test: test_nvme_tcp_icresp_handle ...[2024-12-13 23:39:40.838895] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1
00:06:10.357  [2024-12-13 23:39:40.838980] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048
00:06:10.357  [2024-12-13 23:39:40.839061] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7508df80 is same with the state(5) to be set
00:06:10.357  [2024-12-13 23:39:40.839174] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64
00:06:10.357  [2024-12-13 23:39:40.839270] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7508df80 is same with the state(5) to be set
00:06:10.357  [2024-12-13 23:39:40.839371] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7508df80 is same with the state(0) to be set
00:06:10.357  passed
00:06:10.357    Test: test_nvme_tcp_pdu_payload_handle ...passed
00:06:10.357    Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-12-13 23:39:40.839474] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd7508eaa0): PDU Sequence Error
00:06:10.357  [2024-12-13 23:39:40.839623] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffd7508d260
00:06:10.357  passed
00:06:10.357    Test: test_nvme_tcp_ctrlr_connect_qpair ...passed
00:06:10.357    Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-12-13 23:39:40.839938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffd7508c8e0, errno=0, rc=0
00:06:10.357  [2024-12-13 23:39:40.840050] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7508c8e0 is same with the state(5) to be set
00:06:10.357  [2024-12-13 23:39:40.840177] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7508c8e0 is same with the state(5) to be set
00:06:10.357  [2024-12-13 23:39:40.840299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd7508c8e0 (0): Success
00:06:10.357  [2024-12-13 23:39:40.840389] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd7508c8e0 (0): Success
00:06:10.357  passed
00:06:10.357    Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-12-13 23:39:40.971799] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2.
00:06:10.357  passed
00:06:10.357    Test: test_nvme_tcp_ctrlr_delete_io_qpair ...[2024-12-13 23:39:40.971944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:06:10.357  passed
00:06:10.357    Test: test_nvme_tcp_poll_group_get_stats ...[2024-12-13 23:39:40.972332] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:06:10.357  passed
00:06:10.357    Test: test_nvme_tcp_ctrlr_construct ...[2024-12-13 23:39:40.972416] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:06:10.357  [2024-12-13 23:39:40.972766] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:06:10.357  [2024-12-13 23:39:40.972885] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:06:10.357  [2024-12-13 23:39:40.973101] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254
00:06:10.357  [2024-12-13 23:39:40.973213] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:06:10.357  [2024-12-13 23:39:40.973413] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23
00:06:10.357  [2024-12-13 23:39:40.973525] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:06:10.357  passed
00:06:10.357    Test: test_nvme_tcp_qpair_submit_request ...[2024-12-13 23:39:40.973808] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024
00:06:10.357  passed
00:06:10.357  [2024-12-13 23:39:40.973909] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed
00:06:10.357  
00:06:10.357  
00:06:10.357  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:10.357                suites      1      1    n/a      0        0
00:06:10.357                 tests     27     27     27      0        0
00:06:10.357               asserts    624    624    624      0      n/a
00:06:10.357  
00:06:10.357  Elapsed time =    0.138 seconds
00:06:10.357   23:39:41	-- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut
00:06:10.357  
00:06:10.357  
00:06:10.357       CUnit - A unit testing framework for C - Version 2.1-3
00:06:10.357       http://cunit.sourceforge.net/
00:06:10.357  
00:06:10.357  
00:06:10.357  Suite: nvme_transport
00:06:10.357    Test: test_nvme_get_transport ...passed
00:06:10.357    Test: test_nvme_transport_poll_group_connect_qpair ...passed
00:06:10.357    Test: test_nvme_transport_poll_group_disconnect_qpair ...passed
00:06:10.357    Test: test_nvme_transport_poll_group_add_remove ...passed
00:06:10.357    Test: test_ctrlr_get_memory_domains ...passed
00:06:10.357  
00:06:10.357  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:10.357                suites      1      1    n/a      0        0
00:06:10.357                 tests      5      5      5      0        0
00:06:10.357               asserts     28     28     28      0      n/a
00:06:10.357  
00:06:10.357  Elapsed time =    0.000 seconds
00:06:10.357   23:39:41	-- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut
00:06:10.357  
00:06:10.357  
00:06:10.357       CUnit - A unit testing framework for C - Version 2.1-3
00:06:10.357       http://cunit.sourceforge.net/
00:06:10.357  
00:06:10.357  
00:06:10.357  Suite: nvme_io_msg
00:06:10.357    Test: test_nvme_io_msg_send ...passed
00:06:10.357    Test: test_nvme_io_msg_process ...passed
00:06:10.357    Test: test_nvme_io_msg_ctrlr_register_unregister ...passed
00:06:10.357  
00:06:10.357  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:10.357                suites      1      1    n/a      0        0
00:06:10.357                 tests      3      3      3      0        0
00:06:10.357               asserts     56     56     56      0      n/a
00:06:10.358  
00:06:10.358  Elapsed time =    0.000 seconds
00:06:10.358   23:39:41	-- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut
00:06:10.358  
00:06:10.358  
00:06:10.358       CUnit - A unit testing framework for C - Version 2.1-3
00:06:10.358       http://cunit.sourceforge.net/
00:06:10.358  
00:06:10.358  
00:06:10.358  Suite: nvme_pcie_common
00:06:10.358    Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-12-13 23:39:41.069502] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:  87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range!
00:06:10.358  passed
00:06:10.358    Test: test_nvme_pcie_qpair_construct_destroy ...passed
00:06:10.358    Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed
00:06:10.358    Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-12-13 23:39:41.070289] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed!
00:06:10.358  [2024-12-13 23:39:41.070438] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq!
00:06:10.358  [2024-12-13 23:39:41.070491] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq
00:06:10.358  passed
00:06:10.358    Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed
00:06:10.358    Test: test_nvme_pcie_poll_group_get_stats ...[2024-12-13 23:39:41.070937] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:06:10.358  passed
00:06:10.358  [2024-12-13 23:39:41.071011] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:06:10.358  
00:06:10.358  
00:06:10.358  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:10.358                suites      1      1    n/a      0        0
00:06:10.358                 tests      6      6      6      0        0
00:06:10.358               asserts    148    148    148      0      n/a
00:06:10.358  
00:06:10.358  Elapsed time =    0.002 seconds
00:06:10.358   23:39:41	-- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut
00:06:10.616  
00:06:10.616  
00:06:10.616       CUnit - A unit testing framework for C - Version 2.1-3
00:06:10.616       http://cunit.sourceforge.net/
00:06:10.616  
00:06:10.616  
00:06:10.616  Suite: nvme_fabric
00:06:10.616    Test: test_nvme_fabric_prop_set_cmd ...passed
00:06:10.616    Test: test_nvme_fabric_prop_get_cmd ...passed
00:06:10.616    Test: test_nvme_fabric_get_discovery_log_page ...passed
00:06:10.616    Test: test_nvme_fabric_discover_probe ...passed
00:06:10.616    Test: test_nvme_fabric_qpair_connect ...[2024-12-13 23:39:41.094797] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1
00:06:10.616  passed
00:06:10.616  
00:06:10.616  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:10.616                suites      1      1    n/a      0        0
00:06:10.616                 tests      5      5      5      0        0
00:06:10.616               asserts     60     60     60      0      n/a
00:06:10.616  
00:06:10.616  Elapsed time =    0.001 seconds
00:06:10.616   23:39:41	-- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut
00:06:10.616  
00:06:10.616  
00:06:10.616       CUnit - A unit testing framework for C - Version 2.1-3
00:06:10.616       http://cunit.sourceforge.net/
00:06:10.616  
00:06:10.616  
00:06:10.616  Suite: nvme_opal
00:06:10.616    Test: test_opal_nvme_security_recv_send_done ...passed
00:06:10.616    Test: test_opal_add_short_atom_header ...[2024-12-13 23:39:41.122191] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer.
00:06:10.616  passed
00:06:10.616  
00:06:10.616  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:10.616                suites      1      1    n/a      0        0
00:06:10.617                 tests      2      2      2      0        0
00:06:10.617               asserts     22     22     22      0      n/a
00:06:10.617  
00:06:10.617  Elapsed time =    0.001 seconds
00:06:10.617  
00:06:10.617  real	0m1.210s
00:06:10.617  user	0m0.649s
00:06:10.617  sys	0m0.409s
00:06:10.617   23:39:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:10.617   23:39:41	-- common/autotest_common.sh@10 -- # set +x
00:06:10.617  ************************************
00:06:10.617  END TEST unittest_nvme
00:06:10.617  ************************************
00:06:10.617   23:39:41	-- unit/unittest.sh@223 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut
00:06:10.617   23:39:41	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:10.617   23:39:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:10.617   23:39:41	-- common/autotest_common.sh@10 -- # set +x
00:06:10.617  ************************************
00:06:10.617  START TEST unittest_log
00:06:10.617  ************************************
00:06:10.617   23:39:41	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut
00:06:10.617  
00:06:10.617  
00:06:10.617       CUnit - A unit testing framework for C - Version 2.1-3
00:06:10.617       http://cunit.sourceforge.net/
00:06:10.617  
00:06:10.617  
00:06:10.617  Suite: log
00:06:10.617    Test: log_test ...[2024-12-13 23:39:41.196391] log_ut.c:  54:log_test: *WARNING*: log warning unit test
00:06:10.617  [2024-12-13 23:39:41.196690] log_ut.c:  55:log_test: *DEBUG*: log test
00:06:10.617  log dump test:
00:06:10.617  00000000  6c 6f 67 20 64 75 6d 70                            log dump
00:06:10.617  passed
00:06:10.617    Test: deprecation ...spdk dump test:
00:06:10.617  00000000  73 70 64 6b 20 64 75 6d  70                        spdk dump
00:06:10.617  spdk dump test:
00:06:10.617  00000000  73 70 64 6b 20 64 75 6d  70 20 31 36 20 6d 6f 72  spdk dump 16 mor
00:06:10.617  00000010  65 20 63 68 61 72 73                              e chars
00:06:11.553  passed
00:06:11.553  
00:06:11.553  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:11.553                suites      1      1    n/a      0        0
00:06:11.553                 tests      2      2      2      0        0
00:06:11.553               asserts     73     73     73      0      n/a
00:06:11.553  
00:06:11.553  Elapsed time =    0.001 seconds
00:06:11.553  
00:06:11.553  real	0m1.026s
00:06:11.553  user	0m0.017s
00:06:11.553  sys	0m0.009s
00:06:11.553   23:39:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:11.553  ************************************
00:06:11.553   23:39:42	-- common/autotest_common.sh@10 -- # set +x
00:06:11.553  END TEST unittest_log
00:06:11.553  ************************************
00:06:11.553   23:39:42	-- unit/unittest.sh@224 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut
00:06:11.553   23:39:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:11.553   23:39:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:11.553   23:39:42	-- common/autotest_common.sh@10 -- # set +x
00:06:11.553  ************************************
00:06:11.553  START TEST unittest_lvol
00:06:11.553  ************************************
00:06:11.553   23:39:42	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut
00:06:11.553  
00:06:11.553  
00:06:11.553       CUnit - A unit testing framework for C - Version 2.1-3
00:06:11.553       http://cunit.sourceforge.net/
00:06:11.553  
00:06:11.553  
00:06:11.553  Suite: lvol
00:06:11.553    Test: lvs_init_unload_success ...[2024-12-13 23:39:42.278509] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store
00:06:11.553  passed
00:06:11.553    Test: lvs_init_destroy_success ...[2024-12-13 23:39:42.279568] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store
00:06:11.553  passed
00:06:11.553    Test: lvs_init_opts_success ...passed
00:06:11.553    Test: lvs_unload_lvs_is_null_fail ...[2024-12-13 23:39:42.279970] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL
00:06:11.553  passed
00:06:11.553    Test: lvs_names ...[2024-12-13 23:39:42.280214] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified.
00:06:11.553  [2024-12-13 23:39:42.280402] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator.
00:06:11.553  [2024-12-13 23:39:42.280741] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists
00:06:11.553  passed
00:06:11.553    Test: lvol_create_destroy_success ...passed
00:06:11.553    Test: lvol_create_fail ...[2024-12-13 23:39:42.281484] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist
00:06:11.553  [2024-12-13 23:39:42.281804] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist
00:06:11.553  passed
00:06:11.553    Test: lvol_destroy_fail ...[2024-12-13 23:39:42.282314] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal
00:06:11.553  passed
00:06:11.553    Test: lvol_close ...[2024-12-13 23:39:42.282705] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist
00:06:11.553  [2024-12-13 23:39:42.282913] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol
00:06:11.553  passed
00:06:11.553    Test: lvol_resize ...passed
00:06:11.553    Test: lvol_set_read_only ...passed
00:06:11.553    Test: test_lvs_load ...[2024-12-13 23:39:42.283906] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value
00:06:11.553  [2024-12-13 23:39:42.284105] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options
00:06:11.553  passed
00:06:11.553    Test: lvols_load ...[2024-12-13 23:39:42.284542] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list
00:06:11.553  [2024-12-13 23:39:42.284824] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list
00:06:11.553  passed
00:06:11.553    Test: lvol_open ...passed
00:06:11.813    Test: lvol_snapshot ...passed
00:06:11.813    Test: lvol_snapshot_fail ...[2024-12-13 23:39:42.286464] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists
00:06:11.813  passed
00:06:11.813    Test: lvol_clone ...passed
00:06:11.813    Test: lvol_clone_fail ...[2024-12-13 23:39:42.287300] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists
00:06:11.813  passed
00:06:11.813    Test: lvol_iter_clones ...passed
00:06:11.813    Test: lvol_refcnt ...[2024-12-13 23:39:42.287979] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 052b75d2-6b51-40d6-ad10-033015b9aa75 because it is still open
00:06:11.813  passed
00:06:11.813    Test: lvol_names ...[2024-12-13 23:39:42.288358] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator.
00:06:11.813  [2024-12-13 23:39:42.288610] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:06:11.813  [2024-12-13 23:39:42.289016] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created
00:06:11.813  passed
00:06:11.813    Test: lvol_create_thin_provisioned ...passed
00:06:11.813    Test: lvol_rename ...[2024-12-13 23:39:42.290794] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:06:11.813  [2024-12-13 23:39:42.291075] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs
00:06:11.813  passed
00:06:11.813    Test: lvs_rename ...[2024-12-13 23:39:42.291655] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed
00:06:11.813  passed
00:06:11.813    Test: lvol_inflate ...[2024-12-13 23:39:42.292236] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol
00:06:11.813  passed
00:06:11.813    Test: lvol_decouple_parent ...[2024-12-13 23:39:42.292674] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol
00:06:11.813  passed
00:06:11.813    Test: lvol_get_xattr ...passed
00:06:11.813    Test: lvol_esnap_reload ...passed
00:06:11.813    Test: lvol_esnap_create_bad_args ...[2024-12-13 23:39:42.293754] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist
00:06:11.813  [2024-12-13 23:39:42.293940] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator.
00:06:11.813  [2024-12-13 23:39:42.294314] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576
00:06:11.813  [2024-12-13 23:39:42.294443] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:06:11.813  [2024-12-13 23:39:42.294898] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists
00:06:11.813  passed
00:06:11.813    Test: lvol_esnap_create_delete ...passed
00:06:11.813    Test: lvol_esnap_load_esnaps ...passed
00:06:11.813    Test: lvol_esnap_missing ...[2024-12-13 23:39:42.295607] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context
00:06:11.813  [2024-12-13 23:39:42.295996] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists
00:06:11.813  [2024-12-13 23:39:42.296072] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists
00:06:11.813  passed
00:06:11.813    Test: lvol_esnap_hotplug ...
00:06:11.813  	lvol_esnap_hotplug scenario 0: PASS - one missing, happy path
00:06:11.813  	lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set
00:06:11.813  [2024-12-13 23:39:42.297216] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol d9d70c81-5a21-4b20-a3c1-8b9e13b5d187: failed to create esnap bs_dev: error -12
00:06:11.813  	lvol_esnap_hotplug scenario 2: PASS - one missing, cb returns -ENOMEM
00:06:11.813  	lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path
00:06:11.813  [2024-12-13 23:39:42.298022] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 7684f18f-ce61-43e7-aabc-bfaa07f52beb: failed to create esnap bs_dev: error -12
00:06:11.813  	lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM
00:06:11.814  [2024-12-13 23:39:42.298416] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 36247617-9422-4a0c-acbe-7995d568c023: failed to create esnap bs_dev: error -12
00:06:11.814  	lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM
00:06:11.814  	lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path
00:06:11.814  	lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing
00:06:11.814  	lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path
00:06:11.814  	lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing
00:06:11.814  	lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing
00:06:11.814  	lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing
00:06:11.814  	lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing
00:06:11.814  passed
00:06:11.814    Test: lvol_get_by ...passed
00:06:11.814  
00:06:11.814  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:11.814                suites      1      1    n/a      0        0
00:06:11.814                 tests     34     34     34      0        0
00:06:11.814               asserts   1439   1439   1439      0      n/a
00:06:11.814  
00:06:11.814  Elapsed time =    0.019 seconds
00:06:11.814  
00:06:11.814  real	0m0.059s
00:06:11.814  user	0m0.035s
00:06:11.814  sys	0m0.021s
00:06:11.814   23:39:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:11.814   23:39:42	-- common/autotest_common.sh@10 -- # set +x
00:06:11.814  ************************************
00:06:11.814  END TEST unittest_lvol
00:06:11.814  ************************************
00:06:11.814   23:39:42	-- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:11.814   23:39:42	-- unit/unittest.sh@226 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut
00:06:11.814   23:39:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:11.814   23:39:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:11.814   23:39:42	-- common/autotest_common.sh@10 -- # set +x
00:06:11.814  ************************************
00:06:11.814  START TEST unittest_nvme_rdma
00:06:11.814  ************************************
00:06:11.814   23:39:42	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut
00:06:11.814  
00:06:11.814  
00:06:11.814       CUnit - A unit testing framework for C - Version 2.1-3
00:06:11.814       http://cunit.sourceforge.net/
00:06:11.814  
00:06:11.814  
00:06:11.814  Suite: nvme_rdma
00:06:11.814    Test: test_nvme_rdma_build_sgl_request ...[2024-12-13 23:39:42.381714] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34
00:06:11.814  [2024-12-13 23:39:42.382279] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215
00:06:11.814  passed
00:06:11.814    Test: test_nvme_rdma_build_sgl_inline_request ...passed
00:06:11.814    Test: test_nvme_rdma_build_contig_request ...[2024-12-13 23:39:42.382494] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60)
00:06:11.814  [2024-12-13 23:39:42.382701] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215
00:06:11.814  passed
00:06:11.814    Test: test_nvme_rdma_build_contig_inline_request ...passed
00:06:11.814    Test: test_nvme_rdma_create_reqs ...passed
00:06:11.814    Test: test_nvme_rdma_create_rsps ...passed
00:06:11.814    Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-12-13 23:39:42.382963] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs
00:06:11.814  [2024-12-13 23:39:42.383532] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls
00:06:11.814  passed
00:06:11.814    Test: test_nvme_rdma_poller_create ...[2024-12-13 23:39:42.383832] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2.
00:06:11.814  [2024-12-13 23:39:42.383937] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:06:11.814  passed
00:06:11.814    Test: test_nvme_rdma_qpair_process_cm_event ...passed
00:06:11.814    Test: test_nvme_rdma_ctrlr_construct ...[2024-12-13 23:39:42.384279] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255]
00:06:11.814  passed
00:06:11.814    Test: test_nvme_rdma_req_put_and_get ...passed
00:06:11.814    Test: test_nvme_rdma_req_init ...passed
00:06:11.814    Test: test_nvme_rdma_validate_cm_event ...[2024-12-13 23:39:42.384686] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0)
00:06:11.814  passed
00:06:11.814    Test: test_nvme_rdma_qpair_init ...[2024-12-13 23:39:42.384751] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10)
00:06:11.814  passed
00:06:11.814    Test: test_nvme_rdma_qpair_submit_request ...passed
00:06:11.814    Test: test_nvme_rdma_memory_domain ...passed
00:06:11.814    Test: test_rdma_ctrlr_get_memory_domains ...passed
00:06:11.814    Test: test_rdma_get_memory_translation ...passed
00:06:11.814    Test: test_get_rdma_qpair_from_wc ...passed
00:06:11.814    Test: test_nvme_rdma_ctrlr_get_max_sges ...passed
00:06:11.814    Test: test_nvme_rdma_poll_group_get_stats ...[2024-12-13 23:39:42.384981] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain
00:06:11.814  [2024-12-13 23:39:42.385096] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0
00:06:11.814  [2024-12-13 23:39:42.385162] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1
00:06:11.814  [2024-12-13 23:39:42.385328] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:06:11.814  [2024-12-13 23:39:42.385411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:06:11.814  passed
00:06:11.814    Test: test_nvme_rdma_qpair_set_poller ...[2024-12-13 23:39:42.385616] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2.
00:06:11.814  [2024-12-13 23:39:42.385723] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef
00:06:11.814  [2024-12-13 23:39:42.385808] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffdeebd1cf0 on poll group 0x60b0000001a0
00:06:11.814  [2024-12-13 23:39:42.385929] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2.
00:06:11.814  [2024-12-13 23:39:42.386048] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil)
00:06:11.814  [2024-12-13 23:39:42.386160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffdeebd1cf0 on poll group 0x60b0000001a0
00:06:11.814  [2024-12-13 23:39:42.386306] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory
00:06:11.814  passed
00:06:11.814  
00:06:11.814  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:11.814                suites      1      1    n/a      0        0
00:06:11.814                 tests     22     22     22      0        0
00:06:11.814               asserts    412    412    412      0      n/a
00:06:11.814  
00:06:11.814  Elapsed time =    0.005 seconds
00:06:11.814  
00:06:11.814  real	0m0.030s
00:06:11.814  user	0m0.020s
00:06:11.814  sys	0m0.010s
00:06:11.814   23:39:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:11.814   23:39:42	-- common/autotest_common.sh@10 -- # set +x
00:06:11.814  ************************************
00:06:11.814  END TEST unittest_nvme_rdma
00:06:11.814  ************************************
00:06:11.814   23:39:42	-- unit/unittest.sh@227 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut
00:06:11.814   23:39:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:11.814   23:39:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:11.814   23:39:42	-- common/autotest_common.sh@10 -- # set +x
00:06:11.814  ************************************
00:06:11.814  START TEST unittest_nvmf_transport
00:06:11.814  ************************************
00:06:11.814   23:39:42	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut
00:06:11.814  
00:06:11.814  
00:06:11.814       CUnit - A unit testing framework for C - Version 2.1-3
00:06:11.814       http://cunit.sourceforge.net/
00:06:11.814  
00:06:11.814  
00:06:11.814  Suite: nvmf
00:06:11.814    Test: test_spdk_nvmf_transport_create ...[2024-12-13 23:39:42.464281] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable.
00:06:11.814  [2024-12-13 23:39:42.464699] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0
00:06:11.814  [2024-12-13 23:39:42.464782] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536
00:06:11.814  [2024-12-13 23:39:42.464931] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB
00:06:11.814  passed
00:06:11.814    Test: test_nvmf_transport_poll_group_create ...passed
00:06:11.814    Test: test_spdk_nvmf_transport_opts_init ...[2024-12-13 23:39:42.465200] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable.
00:06:11.814  [2024-12-13 23:39:42.465299] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL
00:06:11.814  [2024-12-13 23:39:42.465338] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value
00:06:11.814  passed
00:06:11.814    Test: test_spdk_nvmf_transport_listen_ext ...passed
00:06:11.814  
00:06:11.814  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:11.814                suites      1      1    n/a      0        0
00:06:11.814                 tests      4      4      4      0        0
00:06:11.814               asserts     49     49     49      0      n/a
00:06:11.814  
00:06:11.814  Elapsed time =    0.001 seconds
00:06:11.814  
00:06:11.814  real	0m0.044s
00:06:11.814  user	0m0.023s
00:06:11.814  sys	0m0.019s
00:06:11.814   23:39:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:11.814   23:39:42	-- common/autotest_common.sh@10 -- # set +x
00:06:11.814  ************************************
00:06:11.814  END TEST unittest_nvmf_transport
00:06:11.814  ************************************
00:06:11.814   23:39:42	-- unit/unittest.sh@228 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut
00:06:11.814   23:39:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:11.814   23:39:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:11.815   23:39:42	-- common/autotest_common.sh@10 -- # set +x
00:06:11.815  ************************************
00:06:11.815  START TEST unittest_rdma
00:06:11.815  ************************************
00:06:11.815   23:39:42	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut
00:06:12.074  
00:06:12.074  
00:06:12.074       CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.074       http://cunit.sourceforge.net/
00:06:12.074  
00:06:12.074  
00:06:12.074  Suite: rdma_common
00:06:12.074    Test: test_spdk_rdma_pd ...[2024-12-13 23:39:42.556227] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD
00:06:12.074  [2024-12-13 23:39:42.556758] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD
00:06:12.074  passed
00:06:12.074  
00:06:12.074  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:12.074                suites      1      1    n/a      0        0
00:06:12.074                 tests      1      1      1      0        0
00:06:12.074               asserts     31     31     31      0      n/a
00:06:12.074  
00:06:12.074  Elapsed time =    0.001 seconds
00:06:12.074  
00:06:12.074  real	0m0.032s
00:06:12.074  user	0m0.010s
00:06:12.074  sys	0m0.022s
00:06:12.074   23:39:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:12.074   23:39:42	-- common/autotest_common.sh@10 -- # set +x
00:06:12.074  ************************************
00:06:12.074  END TEST unittest_rdma
00:06:12.074  ************************************
00:06:12.074   23:39:42	-- unit/unittest.sh@231 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:12.074   23:39:42	-- unit/unittest.sh@232 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut
00:06:12.074   23:39:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:12.074   23:39:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:12.074   23:39:42	-- common/autotest_common.sh@10 -- # set +x
00:06:12.074  ************************************
00:06:12.074  START TEST unittest_nvme_cuse
00:06:12.074  ************************************
00:06:12.074   23:39:42	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut
00:06:12.074  
00:06:12.074  
00:06:12.074       CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.074       http://cunit.sourceforge.net/
00:06:12.074  
00:06:12.074  
00:06:12.074  Suite: nvme_cuse
00:06:12.074    Test: test_cuse_nvme_submit_io_read_write ...passed
00:06:12.074    Test: test_cuse_nvme_submit_io_read_write_with_md ...passed
00:06:12.074    Test: test_cuse_nvme_submit_passthru_cmd ...passed
00:06:12.074    Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed
00:06:12.074    Test: test_nvme_cuse_get_cuse_ns_device ...passed
00:06:12.074    Test: test_cuse_nvme_submit_io ...[2024-12-13 23:39:42.646536] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid
00:06:12.074  passed
00:06:12.074    Test: test_cuse_nvme_reset ...[2024-12-13 23:39:42.646891] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported
00:06:12.074  passed
00:06:12.074    Test: test_nvme_cuse_stop ...passed
00:06:12.074    Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed
00:06:12.074  
00:06:12.074  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:12.074                suites      1      1    n/a      0        0
00:06:12.074                 tests      9      9      9      0        0
00:06:12.074               asserts    121    121    121      0      n/a
00:06:12.074  
00:06:12.074  Elapsed time =    0.002 seconds
00:06:12.074  
00:06:12.074  real	0m0.032s
00:06:12.074  user	0m0.016s
00:06:12.074  sys	0m0.016s
00:06:12.074   23:39:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:12.074   23:39:42	-- common/autotest_common.sh@10 -- # set +x
00:06:12.074  ************************************
00:06:12.074  END TEST unittest_nvme_cuse
00:06:12.074  ************************************
00:06:12.074   23:39:42	-- unit/unittest.sh@235 -- # run_test unittest_nvmf unittest_nvmf
00:06:12.074   23:39:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:12.074   23:39:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:12.074   23:39:42	-- common/autotest_common.sh@10 -- # set +x
00:06:12.074  ************************************
00:06:12.074  START TEST unittest_nvmf
00:06:12.074  ************************************
00:06:12.074   23:39:42	-- common/autotest_common.sh@1114 -- # unittest_nvmf
00:06:12.074   23:39:42	-- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut
00:06:12.074  
00:06:12.074  
00:06:12.074       CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.074       http://cunit.sourceforge.net/
00:06:12.074  
00:06:12.074  
00:06:12.074  Suite: nvmf
00:06:12.074    Test: test_get_log_page ...[2024-12-13 23:39:42.731673] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2
00:06:12.074  passed
00:06:12.074    Test: test_process_fabrics_cmd ...passed
00:06:12.074    Test: test_connect ...[2024-12-13 23:39:42.732900] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small
00:06:12.074  [2024-12-13 23:39:42.733157] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234
00:06:12.074  [2024-12-13 23:39:42.733358] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated
00:06:12.074  [2024-12-13 23:39:42.733556] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1'
00:06:12.074  [2024-12-13 23:39:42.733842] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0
00:06:12.074  [2024-12-13 23:39:42.734032] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31)
00:06:12.074  [2024-12-13 23:39:42.734291] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63)
00:06:12.074  [2024-12-13 23:39:42.734468] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234).
00:06:12.074  [2024-12-13 23:39:42.734746] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff
00:06:12.074  [2024-12-13 23:39:42.734968] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller
00:06:12.074  [2024-12-13 23:39:42.735424] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled
00:06:12.074  [2024-12-13 23:39:42.735662] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3
00:06:12.074  [2024-12-13 23:39:42.735913] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3
00:06:12.074  [2024-12-13 23:39:42.736145] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2
00:06:12.074  [2024-12-13 23:39:42.736399] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1
00:06:12.075  [2024-12-13 23:39:42.736671] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil))
00:06:12.075  passed
00:06:12.075    Test: test_get_ns_id_desc_list ...passed
00:06:12.075    Test: test_identify_ns ...[2024-12-13 23:39:42.737074] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:06:12.075  [2024-12-13 23:39:42.737457] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4
00:06:12.075  [2024-12-13 23:39:42.737775] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295
00:06:12.075  passed
00:06:12.075    Test: test_identify_ns_iocs_specific ...[2024-12-13 23:39:42.738086] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:06:12.075  [2024-12-13 23:39:42.738496] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:06:12.075  passed
00:06:12.075    Test: test_reservation_write_exclusive ...passed
00:06:12.075    Test: test_reservation_exclusive_access ...passed
00:06:12.075    Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed
00:06:12.075    Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed
00:06:12.075    Test: test_reservation_notification_log_page ...passed
00:06:12.075    Test: test_get_dif_ctx ...passed
00:06:12.075    Test: test_set_get_features ...[2024-12-13 23:39:42.739154] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9
00:06:12.075  [2024-12-13 23:39:42.739325] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9
00:06:12.075  [2024-12-13 23:39:42.739490] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3
00:06:12.075  [2024-12-13 23:39:42.739694] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit
00:06:12.075  passed
00:06:12.075    Test: test_identify_ctrlr ...passed
00:06:12.075    Test: test_identify_ctrlr_iocs_specific ...passed
00:06:12.075    Test: test_custom_admin_cmd ...passed
00:06:12.075    Test: test_fused_compare_and_write ...[2024-12-13 23:39:42.740289] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations
00:06:12.075  [2024-12-13 23:39:42.740481] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations
00:06:12.075  [2024-12-13 23:39:42.740682] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations
00:06:12.075  passed
00:06:12.075    Test: test_multi_async_event_reqs ...passed
00:06:12.075    Test: test_get_ana_log_page_one_ns_per_anagrp ...passed
00:06:12.075    Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed
00:06:12.075    Test: test_multi_async_events ...passed
00:06:12.075    Test: test_rae ...passed
00:06:12.075    Test: test_nvmf_ctrlr_create_destruct ...passed
00:06:12.075    Test: test_nvmf_ctrlr_use_zcopy ...passed
00:06:12.075    Test: test_spdk_nvmf_request_zcopy_start ...[2024-12-13 23:39:42.741295] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT
00:06:12.075  passed
00:06:12.075    Test: test_zcopy_read ...passed
00:06:12.075    Test: test_zcopy_write ...passed
00:06:12.075    Test: test_nvmf_property_set ...passed
00:06:12.075    Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-12-13 23:39:42.741591] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support
00:06:12.075  [2024-12-13 23:39:42.741800] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support
00:06:12.075  passed
00:06:12.075    Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-12-13 23:39:42.742005] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0
00:06:12.075  [2024-12-13 23:39:42.742181] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0
00:06:12.075  [2024-12-13 23:39:42.742356] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02
00:06:12.075  passed
00:06:12.075  
00:06:12.075  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:12.075                suites      1      1    n/a      0        0
00:06:12.075                 tests     30     30     30      0        0
00:06:12.075               asserts    885    885    885      0      n/a
00:06:12.075  
00:06:12.075  Elapsed time =    0.007 seconds
00:06:12.075   23:39:42	-- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut
00:06:12.075  
00:06:12.075  
00:06:12.075       CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.075       http://cunit.sourceforge.net/
00:06:12.075  
00:06:12.075  
00:06:12.075  Suite: nvmf
00:06:12.075    Test: test_get_rw_params ...passed
00:06:12.075    Test: test_lba_in_range ...passed
00:06:12.075    Test: test_get_dif_ctx ...passed
00:06:12.075    Test: test_nvmf_bdev_ctrlr_identify_ns ...passed
00:06:12.075    Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-12-13 23:39:42.773356] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch
00:06:12.075  [2024-12-13 23:39:42.773634] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media
00:06:12.075  [2024-12-13 23:39:42.773745] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023
00:06:12.075  passed
00:06:12.075    Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-12-13 23:39:42.773805] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media
00:06:12.075  [2024-12-13 23:39:42.773891] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023
00:06:12.075  passed
00:06:12.075    Test: test_nvmf_bdev_ctrlr_cmd ...[2024-12-13 23:39:42.774002] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media
00:06:12.075  [2024-12-13 23:39:42.774045] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512
00:06:12.075  [2024-12-13 23:39:42.774119] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib
00:06:12.075  passed
00:06:12.075    Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed
00:06:12.075  [2024-12-13 23:39:42.774171] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media
00:06:12.075  
00:06:12.075    Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed
00:06:12.075  
00:06:12.075  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:12.075                suites      1      1    n/a      0        0
00:06:12.075                 tests      9      9      9      0        0
00:06:12.075               asserts    157    157    157      0      n/a
00:06:12.075  
00:06:12.075  Elapsed time =    0.001 seconds
00:06:12.075   23:39:42	-- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut
00:06:12.075  
00:06:12.075  
00:06:12.075       CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.075       http://cunit.sourceforge.net/
00:06:12.075  
00:06:12.075  
00:06:12.075  Suite: nvmf
00:06:12.075    Test: test_discovery_log ...passed
00:06:12.075    Test: test_discovery_log_with_filters ...passed
00:06:12.075  
00:06:12.075  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:12.075                suites      1      1    n/a      0        0
00:06:12.075                 tests      2      2      2      0        0
00:06:12.075               asserts    238    238    238      0      n/a
00:06:12.075  
00:06:12.075  Elapsed time =    0.003 seconds
00:06:12.335   23:39:42	-- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut
00:06:12.335  
00:06:12.335  
00:06:12.335       CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.335       http://cunit.sourceforge.net/
00:06:12.335  
00:06:12.335  
00:06:12.335  Suite: nvmf
00:06:12.335    Test: nvmf_test_create_subsystem ...[2024-12-13 23:39:42.841124] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix.
00:06:12.335  [2024-12-13 23:39:42.841478] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long.
00:06:12.335  [2024-12-13 23:39:42.841623] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter.
00:06:12.335  [2024-12-13 23:39:42.841678] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter.
00:06:12.335  [2024-12-13 23:39:42.841715] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol.
00:06:12.335  [2024-12-13 23:39:42.841783] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter.
00:06:12.335  [2024-12-13 23:39:42.841921] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223
00:06:12.335  [2024-12-13 23:39:42.842117] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8.
00:06:12.335  [2024-12-13 23:39:42.842262] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length
00:06:12.335  [2024-12-13 23:39:42.842337] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly
00:06:12.335  [2024-12-13 23:39:42.842378] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly
00:06:12.335  passed
00:06:12.335    Test: test_spdk_nvmf_subsystem_add_ns ...[2024-12-13 23:39:42.842575] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use
00:06:12.335  passed
00:06:12.335    Test: test_spdk_nvmf_subsystem_set_sn ...passed
00:06:12.335    Test: test_reservation_register ...[2024-12-13 23:39:42.842718] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295
00:06:12.335  [2024-12-13 23:39:42.843016] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:06:12.335  [2024-12-13 23:39:42.843172] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant
00:06:12.335  passed
00:06:12.335    Test: test_reservation_register_with_ptpl ...passed
00:06:12.335    Test: test_reservation_acquire_preempt_1 ...[2024-12-13 23:39:42.844240] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:06:12.335  passed
00:06:12.335    Test: test_reservation_acquire_release_with_ptpl ...passed
00:06:12.335    Test: test_reservation_release ...[2024-12-13 23:39:42.845969] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:06:12.335  passed
00:06:12.335    Test: test_reservation_unregister_notification ...[2024-12-13 23:39:42.846215] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:06:12.335  passed
00:06:12.335    Test: test_reservation_release_notification ...[2024-12-13 23:39:42.846475] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:06:12.335  passed
00:06:12.335    Test: test_reservation_release_notification_write_exclusive ...[2024-12-13 23:39:42.846746] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:06:12.335  passed
00:06:12.335    Test: test_reservation_clear_notification ...[2024-12-13 23:39:42.846995] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:06:12.335  passed
00:06:12.335    Test: test_reservation_preempt_notification ...[2024-12-13 23:39:42.847228] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:06:12.335  passed
00:06:12.335    Test: test_spdk_nvmf_ns_event ...passed
00:06:12.335    Test: test_nvmf_ns_reservation_add_remove_registrant ...passed
00:06:12.335    Test: test_nvmf_subsystem_add_ctrlr ...passed
00:06:12.335    Test: test_spdk_nvmf_subsystem_add_host ...[2024-12-13 23:39:42.848046] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value
00:06:12.335  [2024-12-13 23:39:42.848173] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport
00:06:12.335  passed
00:06:12.335    Test: test_nvmf_ns_reservation_report ...[2024-12-13 23:39:42.848322] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again
00:06:12.335  passed
00:06:12.335    Test: test_nvmf_nqn_is_valid ...passed
00:06:12.335    Test: test_nvmf_ns_reservation_restore ...[2024-12-13 23:39:42.848404] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11
00:06:12.335  [2024-12-13 23:39:42.848445] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:9a57c24a-9fa9-4b34-b42f-2c1d69674cc": uuid is not the correct length
00:06:12.335  [2024-12-13 23:39:42.848484] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter.
00:06:12.335  [2024-12-13 23:39:42.848599] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file
00:06:12.335  passed
00:06:12.335    Test: test_nvmf_subsystem_state_change ...passed
00:06:12.335    Test: test_nvmf_reservation_custom_ops ...passed
00:06:12.335  
00:06:12.335  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:12.335                suites      1      1    n/a      0        0
00:06:12.335                 tests     22     22     22      0        0
00:06:12.335               asserts    407    407    407      0      n/a
00:06:12.335  
00:06:12.335  Elapsed time =    0.009 seconds
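The subsystem_ut errors above are deliberate: nvmf_nqn_is_valid() is fed malformed NQNs, and the messages spell out the rules (minimum length 11, maximum 223 bytes, domain labels that start with a letter and end alphanumeric, and a strict layout for nqn.2014-08.org.nvmexpress:uuid: names). A minimal standalone sketch of the length and UUID-format checks, with hypothetical helper names rather than SPDK's actual code:

    #include <ctype.h>
    #include <stdbool.h>
    #include <string.h>

    #define NQN_MIN_LEN 11    /* "length 4 < min 11" in the log */
    #define NQN_MAX_LEN 223   /* "length 224 > max 223" */
    #define UUID_NQN_PREFIX "nqn.2014-08.org.nvmexpress:uuid:"

    /* Hypothetical helper: 8-4-4-4-12 hex layout of a UUID string. */
    static bool uuid_format_ok(const char *s)
    {
        if (strlen(s) != 36) {
            return false;              /* "uuid is not the correct length" */
        }
        for (int i = 0; i < 36; i++) {
            bool dash = (i == 8 || i == 13 || i == 18 || i == 23);
            if (dash ? (s[i] != '-') : !isxdigit((unsigned char)s[i])) {
                return false;          /* "uuid is not formatted correctly" */
            }
        }
        return true;
    }

    static bool nqn_basic_checks(const char *nqn)
    {
        size_t len = strlen(nqn);

        if (len < NQN_MIN_LEN || len > NQN_MAX_LEN) {
            return false;
        }
        if (strncmp(nqn, UUID_NQN_PREFIX, strlen(UUID_NQN_PREFIX)) == 0) {
            return uuid_format_ok(nqn + strlen(UUID_NQN_PREFIX));
        }
        /* Domain-label checks (start with a letter, end alphanumeric,
         * "At least one Label is too long" for oversized labels) and the
         * ':' user-string check would follow here. */
        return strncmp(nqn, "nqn.", 4) == 0;
    }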
00:06:12.335   23:39:42	-- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut
00:06:12.335  
00:06:12.335  
00:06:12.335       CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.335       http://cunit.sourceforge.net/
00:06:12.335  
00:06:12.335  
00:06:12.335  Suite: nvmf
00:06:12.335    Test: test_nvmf_tcp_create ...[2024-12-13 23:39:42.906969] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 732:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes
00:06:12.335  passed
00:06:12.335    Test: test_nvmf_tcp_destroy ...passed
00:06:12.335    Test: test_nvmf_tcp_poll_group_create ...passed
00:06:12.335    Test: test_nvmf_tcp_send_c2h_data ...passed
00:06:12.335    Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed
00:06:12.335    Test: test_nvmf_tcp_in_capsule_data_handle ...passed
00:06:12.335    Test: test_nvmf_tcp_qpair_init_mem_resource ...passed
00:06:12.335    Test: test_nvmf_tcp_send_c2h_term_req ...[2024-12-13 23:39:43.015887] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:06:12.335  passed
00:06:12.335    Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed
00:06:12.336    Test: test_nvmf_tcp_icreq_handle ...[2024-12-13 23:39:43.015975] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916a9be0 is same with the state(5) to be set
00:06:12.336  [2024-12-13 23:39:43.016105] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916a9be0 is same with the state(5) to be set
00:06:12.336  [2024-12-13 23:39:43.016159] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:06:12.336  [2024-12-13 23:39:43.016199] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916a9be0 is same with the state(5) to be set
00:06:12.336  [2024-12-13 23:39:43.016306] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
00:06:12.336  [2024-12-13 23:39:43.016420] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:06:12.336  [2024-12-13 23:39:43.016510] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916a9be0 is same with the state(5) to be set
00:06:12.336  [2024-12-13 23:39:43.016560] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
00:06:12.336  [2024-12-13 23:39:43.016617] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916a9be0 is same with the state(5) to be set
00:06:12.336  [2024-12-13 23:39:43.016667] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:06:12.336  [2024-12-13 23:39:43.016716] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916a9be0 is same with the state(5) to be set
00:06:12.336  [2024-12-13 23:39:43.016766] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2
00:06:12.336  [2024-12-13 23:39:43.016833] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916a9be0 is same with the state(5) to be set
00:06:12.336  passed
00:06:12.336    Test: test_nvmf_tcp_check_xfer_type ...passed
00:06:12.336    Test: test_nvmf_tcp_invalid_sgl ...[2024-12-13 23:39:43.016930] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2486:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000
00:06:12.336  passed
00:06:12.336    Test: test_nvmf_tcp_pdu_ch_handle ...[2024-12-13 23:39:43.016984] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:06:12.336  [2024-12-13 23:39:43.017033] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916a9be0 is same with the state(5) to be set
00:06:12.336  [2024-12-13 23:39:43.017100] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2218:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffc916aa940
00:06:12.336  [2024-12-13 23:39:43.017213] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:06:12.336  [2024-12-13 23:39:43.017283] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916aa0a0 is same with the state(5) to be set
00:06:12.336  [2024-12-13 23:39:43.017339] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2275:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffc916aa0a0
00:06:12.336  [2024-12-13 23:39:43.017401] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:06:12.336  [2024-12-13 23:39:43.017446] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916aa0a0 is same with the state(5) to be set
00:06:12.336  [2024-12-13 23:39:43.017495] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2228:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated
00:06:12.336  [2024-12-13 23:39:43.017551] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:06:12.336  [2024-12-13 23:39:43.017633] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916aa0a0 is same with the state(5) to be set
00:06:12.336  [2024-12-13 23:39:43.017700] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2267:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05
00:06:12.336  [2024-12-13 23:39:43.017763] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:06:12.336  [2024-12-13 23:39:43.017822] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916aa0a0 is same with the state(5) to be set
00:06:12.336  [2024-12-13 23:39:43.017862] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:06:12.336  [2024-12-13 23:39:43.017908] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916aa0a0 is same with the state(5) to be set
00:06:12.336  [2024-12-13 23:39:43.017977] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:06:12.336  [2024-12-13 23:39:43.018025] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916aa0a0 is same with the state(5) to be set
00:06:12.336  [2024-12-13 23:39:43.018086] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:06:12.336  [2024-12-13 23:39:43.018130] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916aa0a0 is same with the state(5) to be set
00:06:12.336  [2024-12-13 23:39:43.018178] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:06:12.336  [2024-12-13 23:39:43.018210] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916aa0a0 is same with the state(5) to be set
00:06:12.336  [2024-12-13 23:39:43.018287] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:06:12.336  passed
00:06:12.336    Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-12-13 23:39:43.018327] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916aa0a0 is same with the state(5) to be set
00:06:12.336  [2024-12-13 23:39:43.018381] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:06:12.336  [2024-12-13 23:39:43.018419] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc916aa0a0 is same with the state(5) to be set
00:06:12.336  passed
00:06:12.336    Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-12-13 23:39:43.044209] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small!
00:06:12.336  passed
00:06:12.336    Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-12-13 23:39:43.044303] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested!
00:06:12.336  [2024-12-13 23:39:43.044728] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested!
00:06:12.336  passed
00:06:12.336    Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-12-13 23:39:43.044802] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key!
00:06:12.336  [2024-12-13 23:39:43.045054] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested!
00:06:12.336  [2024-12-13 23:39:43.045125] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key!
00:06:12.336  passed
00:06:12.336  
00:06:12.336  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:12.336                suites      1      1    n/a      0        0
00:06:12.336                 tests     17     17     17      0        0
00:06:12.336               asserts    222    222    222      0      n/a
00:06:12.336  
00:06:12.336  Elapsed time =    0.162 seconds
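The TERM_REQ spam in tcp_ut comes from test_nvmf_tcp_pdu_ch_handle driving the PDU common-header checks: a duplicate or malformed ICReq (header length must be 128, PFV must be 0), any PDU arriving before the connection is negotiated, and unknown PDU types each trigger a C2H termination request. A rough sketch of that dispatch shape, using simplified stand-in types rather than SPDK's structs:

    #include <stdint.h>

    /* Simplified stand-ins for the NVMe/TCP common header (assumption,
     * not SPDK's definitions). */
    enum pdu_type { PDU_ICREQ = 0x00, PDU_H2C_TERM = 0x02, PDU_CMD_CAPSULE = 0x04 };

    struct pdu_common_hdr {
        uint8_t  pdu_type;
        uint8_t  flags;
        uint8_t  hlen;   /* header length; 128 for ICReq per the log */
        uint8_t  pdo;
        uint32_t plen;
    };

    enum ch_verdict { CH_OK, CH_SEND_TERM_REQ };

    static enum ch_verdict pdu_ch_handle(const struct pdu_common_hdr *ch,
                                         int icreq_seen)
    {
        if (ch->pdu_type == PDU_ICREQ) {
            if (icreq_seen) {
                return CH_SEND_TERM_REQ;  /* "Already received ICreq PDU" */
            }
            if (ch->hlen != 128) {
                return CH_SEND_TERM_REQ;  /* "Expected ICReq header length 128, got 0" */
            }
            return CH_OK;
        }
        if (!icreq_seen) {
            return CH_SEND_TERM_REQ;      /* "connection is not negotiated" */
        }
        switch (ch->pdu_type) {
        case PDU_H2C_TERM:
        case PDU_CMD_CAPSULE:
            return CH_OK;
        default:
            return CH_SEND_TERM_REQ;      /* "Unexpected PDU type 0x05" */
        }
    }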
00:06:12.595   23:39:43	-- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut
00:06:12.595  
00:06:12.595  
00:06:12.595       CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.595       http://cunit.sourceforge.net/
00:06:12.595  
00:06:12.595  
00:06:12.595  Suite: nvmf
00:06:12.595    Test: test_nvmf_tgt_create_poll_group ...passed
00:06:12.595  
00:06:12.595  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:12.595                suites      1      1    n/a      0        0
00:06:12.595                 tests      1      1      1      0        0
00:06:12.595               asserts     17     17     17      0      n/a
00:06:12.595  
00:06:12.595  Elapsed time =    0.022 seconds
00:06:12.595  
00:06:12.595  real	0m0.488s
00:06:12.595  user	0m0.209s
00:06:12.595  sys	0m0.277s
00:06:12.595   23:39:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:12.595   23:39:43	-- common/autotest_common.sh@10 -- # set +x
00:06:12.595  ************************************
00:06:12.595  END TEST unittest_nvmf
00:06:12.595  ************************************
00:06:12.595   23:39:43	-- unit/unittest.sh@236 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:12.595   23:39:43	-- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:12.595   23:39:43	-- unit/unittest.sh@242 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:06:12.595   23:39:43	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:12.595   23:39:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:12.595   23:39:43	-- common/autotest_common.sh@10 -- # set +x
00:06:12.595  ************************************
00:06:12.595  START TEST unittest_nvmf_rdma
00:06:12.595  ************************************
00:06:12.595   23:39:43	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:06:12.595  
00:06:12.595  
00:06:12.595       CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.595       http://cunit.sourceforge.net/
00:06:12.595  
00:06:12.595  
00:06:12.595  Suite: nvmf
00:06:12.595    Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-12-13 23:39:43.278359] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000
00:06:12.595  [2024-12-13 23:39:43.278716] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0
00:06:12.595  [2024-12-13 23:39:43.278783] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000
00:06:12.595  passed
00:06:12.595    Test: test_spdk_nvmf_rdma_request_process ...passed
00:06:12.595    Test: test_nvmf_rdma_get_optimal_poll_group ...passed
00:06:12.595    Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed
00:06:12.595    Test: test_nvmf_rdma_opts_init ...passed
00:06:12.595    Test: test_nvmf_rdma_request_free_data ...passed
00:06:12.595    Test: test_nvmf_rdma_update_ibv_state ...[2024-12-13 23:39:43.280031] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 616:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state!
00:06:12.595  [2024-12-13 23:39:43.280092] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 627:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue
00:06:12.595  passed
00:06:12.595    Test: test_nvmf_rdma_resources_create ...passed
00:06:12.595    Test: test_nvmf_rdma_qpair_compare ...passed
00:06:12.595    Test: test_nvmf_rdma_resize_cq ...[2024-12-13 23:39:43.281306] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1008:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0
00:06:12.595  Using CQ of insufficient size may lead to CQ overrun
00:06:12.595  [2024-12-13 23:39:43.281428] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1013:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3)
00:06:12.595  [2024-12-13 23:39:43.281484] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1021:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory
00:06:12.595  passed
00:06:12.595  
00:06:12.595  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:12.595                suites      1      1    n/a      0        0
00:06:12.595                 tests     10     10     10      0        0
00:06:12.595               asserts    584    584    584      0      n/a
00:06:12.595  
00:06:12.595  Elapsed time =    0.003 seconds
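The two parse_sgl errors in rdma_ut mirror the transport's SGL sanity checks: a keyed data block longer than the transport's max I/O size, and an in-capsule data segment longer than the capsule itself. A minimal sketch with illustrative field names (SPDK derives the real limits from its transport opts):

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative request-parse limits. */
    struct sgl_limits {
        uint32_t max_io_size;      /* 0x20000 in the unit test */
        uint32_t in_capsule_size;  /* 0x1000 */
    };

    static bool sgl_keyed_ok(const struct sgl_limits *lim, uint32_t sgl_len)
    {
        /* "SGL length 0x40000 exceeds max io size 0x20000" */
        return sgl_len <= lim->max_io_size;
    }

    static bool sgl_in_capsule_ok(const struct sgl_limits *lim, uint32_t data_len)
    {
        /* "In-capsule data length 0x2000 exceeds capsule length 0x1000" */
        return data_len <= lim->in_capsule_size;
    }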
00:06:12.595  
00:06:12.595  real	0m0.043s
00:06:12.595  user	0m0.027s
00:06:12.595  sys	0m0.015s
00:06:12.595   23:39:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:12.595   23:39:43	-- common/autotest_common.sh@10 -- # set +x
00:06:12.595  ************************************
00:06:12.595  END TEST unittest_nvmf_rdma
00:06:12.595  ************************************
00:06:12.855   23:39:43	-- unit/unittest.sh@245 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:12.855   23:39:43	-- unit/unittest.sh@249 -- # run_test unittest_scsi unittest_scsi
00:06:12.855   23:39:43	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:12.855   23:39:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:12.855   23:39:43	-- common/autotest_common.sh@10 -- # set +x
00:06:12.855  ************************************
00:06:12.855  START TEST unittest_scsi
00:06:12.855  ************************************
00:06:12.855   23:39:43	-- common/autotest_common.sh@1114 -- # unittest_scsi
00:06:12.855   23:39:43	-- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut
00:06:12.855  
00:06:12.855  
00:06:12.855       CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.855       http://cunit.sourceforge.net/
00:06:12.855  
00:06:12.855  
00:06:12.855  Suite: dev_suite
00:06:12.855    Test: dev_destruct_null_dev ...passed
00:06:12.855    Test: dev_destruct_zero_luns ...passed
00:06:12.855    Test: dev_destruct_null_lun ...passed
00:06:12.855    Test: dev_destruct_success ...passed
00:06:12.855    Test: dev_construct_num_luns_zero ...passed
00:06:12.855    Test: dev_construct_no_lun_zero ...[2024-12-13 23:39:43.371314] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified
00:06:12.855  [2024-12-13 23:39:43.371576] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified
00:06:12.855  passed
00:06:12.855    Test: dev_construct_null_lun ...passed
00:06:12.855    Test: dev_construct_name_too_long ...[2024-12-13 23:39:43.371627] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0
00:06:12.855  [2024-12-13 23:39:43.371673] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255
00:06:12.855  passed
00:06:12.855    Test: dev_construct_success ...passed
00:06:12.855    Test: dev_construct_success_lun_zero_not_first ...passed
00:06:12.855    Test: dev_queue_mgmt_task_success ...passed
00:06:12.855    Test: dev_queue_task_success ...passed
00:06:12.855    Test: dev_stop_success ...passed
00:06:12.855    Test: dev_add_port_max_ports ...[2024-12-13 23:39:43.371930] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports
00:06:12.855  passed
00:06:12.855    Test: dev_add_port_construct_failure1 ...[2024-12-13 23:39:43.372020] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c:  49:scsi_port_construct: *ERROR*: port name too long
00:06:12.855  passed
00:06:12.855    Test: dev_add_port_construct_failure2 ...passed
00:06:12.855    Test: dev_add_port_success1 ...passed
00:06:12.855    Test: dev_add_port_success2 ...passed
00:06:12.855    Test: dev_add_port_success3 ...passed
00:06:12.855    Test: dev_find_port_by_id_num_ports_zero ...passed
00:06:12.855    Test: dev_find_port_by_id_id_not_found_failure ...passed
00:06:12.855    Test: dev_find_port_by_id_success ...passed
00:06:12.855    Test: dev_add_lun_bdev_not_found ...passed
00:06:12.855    Test: dev_add_lun_no_free_lun_id ...[2024-12-13 23:39:43.372104] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1)
00:06:12.855  [2024-12-13 23:39:43.372417] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found
00:06:12.855  passed
00:06:12.855    Test: dev_add_lun_success1 ...passed
00:06:12.855    Test: dev_add_lun_success2 ...passed
00:06:12.855    Test: dev_check_pending_tasks ...passed
00:06:12.855    Test: dev_iterate_luns ...passed
00:06:12.855    Test: dev_find_free_lun ...passed
00:06:12.855  
00:06:12.855  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:12.855                suites      1      1    n/a      0        0
00:06:12.855                 tests     29     29     29      0        0
00:06:12.855               asserts     97     97     97      0      n/a
00:06:12.855  
00:06:12.855  Elapsed time =    0.002 seconds
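dev_ut exercises the construct-time and add-port-time guards quoted above: device names are capped at 255 bytes, a device must specify at least one LUN and must include LUN 0, and at most four ports may be added, each with a unique id. A compressed sketch of those checks, with field and function names invented for illustration:

    #include <stdbool.h>
    #include <string.h>

    #define SCSI_DEV_MAX_NAME  255  /* "name longer than maximum allowed length 255" */
    #define SCSI_DEV_MAX_PORTS 4    /* "device already has 4 ports" */

    struct scsi_dev_params {
        const char *name;
        int         num_luns;
        bool        has_lun0;
    };

    /* Construct-time checks in the order the dev_construct_* tests hit them. */
    static bool scsi_dev_construct_ok(const struct scsi_dev_params *p)
    {
        if (strlen(p->name) > SCSI_DEV_MAX_NAME) {
            return false;
        }
        if (p->num_luns == 0) {
            return false;   /* "no LUNs specified" */
        }
        if (!p->has_lun0) {
            return false;   /* "no LUN 0 specified" */
        }
        return true;
    }

    /* Port addition is bounded separately; a duplicate port id is also
     * rejected ("device already has port(1)"). */
    static bool scsi_dev_can_add_port(int cur_ports)
    {
        return cur_ports < SCSI_DEV_MAX_PORTS;
    }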
00:06:12.855   23:39:43	-- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut
00:06:12.855  
00:06:12.855  
00:06:12.855       CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.855       http://cunit.sourceforge.net/
00:06:12.855  
00:06:12.855  
00:06:12.855  Suite: lun_suite
00:06:12.855    Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-12-13 23:39:43.408163] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported
00:06:12.855  passed
00:06:12.855    Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-12-13 23:39:43.408796] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported
00:06:12.855  passed
00:06:12.855    Test: lun_task_mgmt_execute_lun_reset ...passed
00:06:12.855    Test: lun_task_mgmt_execute_target_reset ...passed
00:06:12.855    Test: lun_task_mgmt_execute_invalid_case ...[2024-12-13 23:39:43.409320] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported
00:06:12.855  passed
00:06:12.855    Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed
00:06:12.855    Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed
00:06:12.855    Test: lun_append_task_null_lun_not_supported ...passed
00:06:12.855    Test: lun_execute_scsi_task_pending ...passed
00:06:12.855    Test: lun_execute_scsi_task_complete ...passed
00:06:12.855    Test: lun_execute_scsi_task_resize ...passed
00:06:12.855    Test: lun_destruct_success ...passed
00:06:12.855    Test: lun_construct_null_ctx ...[2024-12-13 23:39:43.410130] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL
00:06:12.855  passed
00:06:12.855    Test: lun_construct_success ...passed
00:06:12.855    Test: lun_reset_task_wait_scsi_task_complete ...passed
00:06:12.855    Test: lun_reset_task_suspend_scsi_task ...passed
00:06:12.855    Test: lun_check_pending_tasks_only_for_specific_initiator ...passed
00:06:12.855    Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed
00:06:12.855  
00:06:12.855  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:12.855                suites      1      1    n/a      0        0
00:06:12.855                 tests     18     18     18      0        0
00:06:12.855               asserts    153    153    153      0      n/a
00:06:12.855  
00:06:12.855  Elapsed time =    0.003 seconds
00:06:12.855   23:39:43	-- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut
00:06:12.855  
00:06:12.855  
00:06:12.855       CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.855       http://cunit.sourceforge.net/
00:06:12.855  
00:06:12.855  
00:06:12.855  Suite: scsi_suite
00:06:12.855    Test: scsi_init ...passed
00:06:12.855  
00:06:12.855  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:12.855                suites      1      1    n/a      0        0
00:06:12.855                 tests      1      1      1      0        0
00:06:12.855               asserts      1      1      1      0      n/a
00:06:12.855  
00:06:12.855  Elapsed time =    0.000 seconds
00:06:12.855   23:39:43	-- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut
00:06:12.855  
00:06:12.855  
00:06:12.855       CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.855       http://cunit.sourceforge.net/
00:06:12.855  
00:06:12.855  
00:06:12.855  Suite: translation_suite
00:06:12.855    Test: mode_select_6_test ...passed
00:06:12.855    Test: mode_select_6_test2 ...passed
00:06:12.855    Test: mode_sense_6_test ...passed
00:06:12.855    Test: mode_sense_10_test ...passed
00:06:12.855    Test: inquiry_evpd_test ...passed
00:06:12.855    Test: inquiry_standard_test ...passed
00:06:12.855    Test: inquiry_overflow_test ...passed
00:06:12.855    Test: task_complete_test ...passed
00:06:12.855    Test: lba_range_test ...passed
00:06:12.856    Test: xfer_len_test ...[2024-12-13 23:39:43.474003] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192
00:06:12.856  passed
00:06:12.856    Test: xfer_test ...passed
00:06:12.856    Test: scsi_name_padding_test ...passed
00:06:12.856    Test: get_dif_ctx_test ...passed
00:06:12.856    Test: unmap_split_test ...passed
00:06:12.856  
00:06:12.856  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:12.856                suites      1      1    n/a      0        0
00:06:12.856                 tests     14     14     14      0        0
00:06:12.856               asserts   1200   1200   1200      0      n/a
00:06:12.856  
00:06:12.856  Elapsed time =    0.004 seconds
00:06:12.856   23:39:43	-- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut
00:06:12.856  
00:06:12.856  
00:06:12.856       CUnit - A unit testing framework for C - Version 2.1-3
00:06:12.856       http://cunit.sourceforge.net/
00:06:12.856  
00:06:12.856  
00:06:12.856  Suite: reservation_suite
00:06:12.856    Test: test_reservation_register ...[2024-12-13 23:39:43.504860] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:06:12.856  passed
00:06:12.856    Test: test_reservation_reserve ...[2024-12-13 23:39:43.505310] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:06:12.856  [2024-12-13 23:39:43.505437] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1
00:06:12.856  [2024-12-13 23:39:43.505626] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match
00:06:12.856  passed
00:06:12.856    Test: test_reservation_preempt_non_all_regs ...[2024-12-13 23:39:43.505756] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:06:12.856  [2024-12-13 23:39:43.505864] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey
00:06:12.856  passed
00:06:12.856    Test: test_reservation_preempt_all_regs ...[2024-12-13 23:39:43.506077] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:06:12.856  passed
00:06:12.856    Test: test_reservation_cmds_conflict ...[2024-12-13 23:39:43.506290] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:06:12.856  [2024-12-13 23:39:43.506424] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type  reject command 0x2a
00:06:12.856  [2024-12-13 23:39:43.506522] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
00:06:12.856  [2024-12-13 23:39:43.506595] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
00:06:12.856  [2024-12-13 23:39:43.506668] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
00:06:12.856  [2024-12-13 23:39:43.506764] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
00:06:12.856  passed
00:06:12.856    Test: test_scsi2_reserve_release ...passed
00:06:12.856    Test: test_pr_with_scsi2_reserve_release ...[2024-12-13 23:39:43.506937] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:06:12.856  passed
00:06:12.856  
00:06:12.856  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:12.856                suites      1      1    n/a      0        0
00:06:12.856                 tests      7      7      7      0        0
00:06:12.856               asserts    257    257    257      0      n/a
00:06:12.856  
00:06:12.856  Elapsed time =    0.002 seconds
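The scsi_pr_ut messages show two layers of persistent-reservation enforcement: PR-OUT REGISTER requires the reservation key in the parameter data to match the registrant's stored key, and scsi_pr_check() gates ordinary I/O (the rejected opcodes 0x28 and 0x2a are READ(10) and WRITE(10)) against the active reservation type. A sketch of just the key-match rule, with invented field names:

    #include <stdbool.h>
    #include <stdint.h>

    struct registrant {
        uint64_t rkey;  /* key stored at registration time */
    };

    /* PR-OUT REGISTER: the key carried in the CDB parameter data must match
     * the registrant's current key before it may be replaced; mismatches
     * produce the "Reservation key 0xa1 don't match registrant's key 0xa"
     * errors seen above. */
    static bool pr_out_register_ok(const struct registrant *reg, uint64_t res_key)
    {
        return reg == NULL || reg->rkey == res_key;
    }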
00:06:12.856  
00:06:12.856  real	0m0.164s
00:06:12.856  user	0m0.072s
00:06:12.856  sys	0m0.095s
00:06:12.856   23:39:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:12.856   23:39:43	-- common/autotest_common.sh@10 -- # set +x
00:06:12.856  ************************************
00:06:12.856  END TEST unittest_scsi
00:06:12.856  ************************************
00:06:12.856    23:39:43	-- unit/unittest.sh@252 -- # uname -s
00:06:12.856   23:39:43	-- unit/unittest.sh@252 -- # '[' Linux = Linux ']'
00:06:12.856   23:39:43	-- unit/unittest.sh@253 -- # run_test unittest_sock unittest_sock
00:06:12.856   23:39:43	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:12.856   23:39:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:12.856   23:39:43	-- common/autotest_common.sh@10 -- # set +x
00:06:12.856  ************************************
00:06:12.856  START TEST unittest_sock
00:06:12.856  ************************************
00:06:12.856   23:39:43	-- common/autotest_common.sh@1114 -- # unittest_sock
00:06:12.856   23:39:43	-- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut
00:06:13.115  
00:06:13.115  
00:06:13.115       CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.115       http://cunit.sourceforge.net/
00:06:13.115  
00:06:13.115  
00:06:13.115  Suite: sock
00:06:13.115    Test: posix_sock ...passed
00:06:13.115    Test: ut_sock ...passed
00:06:13.115    Test: posix_sock_group ...passed
00:06:13.115    Test: ut_sock_group ...passed
00:06:13.115    Test: posix_sock_group_fairness ...passed
00:06:13.115    Test: _posix_sock_close ...passed
00:06:13.115    Test: sock_get_default_opts ...passed
00:06:13.115    Test: ut_sock_impl_get_set_opts ...passed
00:06:13.115    Test: posix_sock_impl_get_set_opts ...passed
00:06:13.115    Test: ut_sock_map ...passed
00:06:13.115    Test: override_impl_opts ...passed
00:06:13.115    Test: ut_sock_group_get_ctx ...passed
00:06:13.115  
00:06:13.115  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.115                suites      1      1    n/a      0        0
00:06:13.115                 tests     12     12     12      0        0
00:06:13.115               asserts    349    349    349      0      n/a
00:06:13.115  
00:06:13.115  Elapsed time =    0.009 seconds
00:06:13.115   23:39:43	-- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut
00:06:13.115  
00:06:13.115  
00:06:13.115       CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.115       http://cunit.sourceforge.net/
00:06:13.115  
00:06:13.115  
00:06:13.115  Suite: posix
00:06:13.115    Test: flush ...passed
00:06:13.115  
00:06:13.115  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.115                suites      1      1    n/a      0        0
00:06:13.115                 tests      1      1      1      0        0
00:06:13.115               asserts     28     28     28      0      n/a
00:06:13.115  
00:06:13.115  Elapsed time =    0.000 seconds
00:06:13.115   23:39:43	-- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:13.115  
00:06:13.115  real	0m0.095s
00:06:13.115  user	0m0.034s
00:06:13.115  sys	0m0.038s
00:06:13.116   23:39:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:13.116   23:39:43	-- common/autotest_common.sh@10 -- # set +x
00:06:13.116  ************************************
00:06:13.116  END TEST unittest_sock
00:06:13.116  ************************************
00:06:13.116   23:39:43	-- unit/unittest.sh@255 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:06:13.116   23:39:43	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:13.116   23:39:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:13.116   23:39:43	-- common/autotest_common.sh@10 -- # set +x
00:06:13.116  ************************************
00:06:13.116  START TEST unittest_thread
00:06:13.116  ************************************
00:06:13.116   23:39:43	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:06:13.116  
00:06:13.116  
00:06:13.116       CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.116       http://cunit.sourceforge.net/
00:06:13.116  
00:06:13.116  
00:06:13.116  Suite: io_channel
00:06:13.116    Test: thread_alloc ...passed
00:06:13.116    Test: thread_send_msg ...passed
00:06:13.116    Test: thread_poller ...passed
00:06:13.116    Test: poller_pause ...passed
00:06:13.116    Test: thread_for_each ...passed
00:06:13.116    Test: for_each_channel_remove ...passed
00:06:13.116    Test: for_each_channel_unreg ...[2024-12-13 23:39:43.763676] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x7ffd5a47be80 already registered (old:0x613000000200 new:0x6130000003c0)
00:06:13.116  passed
00:06:13.116    Test: thread_name ...passed
00:06:13.116    Test: channel ...[2024-12-13 23:39:43.768102] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2299:spdk_get_io_channel: *ERROR*: could not find io_device 0x5581e0cdc0e0
00:06:13.116  passed
00:06:13.116    Test: channel_destroy_races ...passed
00:06:13.116    Test: thread_exit_test ...[2024-12-13 23:39:43.773817] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 631:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully
00:06:13.116  passed
00:06:13.116    Test: thread_update_stats_test ...passed
00:06:13.116    Test: nested_channel ...passed
00:06:13.116    Test: device_unregister_and_thread_exit_race ...passed
00:06:13.116    Test: cache_closest_timed_poller ...passed
00:06:13.116    Test: multi_timed_pollers_have_same_expiration ...passed
00:06:13.116    Test: io_device_lookup ...passed
00:06:13.116    Test: spdk_spin ...[2024-12-13 23:39:43.785821] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3063:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0))
00:06:13.116  [2024-12-13 23:39:43.785888] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffd5a47be70
00:06:13.116  [2024-12-13 23:39:43.785995] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3101:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0))
00:06:13.116  [2024-12-13 23:39:43.787925] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:06:13.116  [2024-12-13 23:39:43.788022] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffd5a47be70
00:06:13.116  [2024-12-13 23:39:43.788065] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread)
00:06:13.116  [2024-12-13 23:39:43.788110] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffd5a47be70
00:06:13.116  [2024-12-13 23:39:43.788149] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread)
00:06:13.116  [2024-12-13 23:39:43.788195] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffd5a47be70
00:06:13.116  [2024-12-13 23:39:43.788704] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3045:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0))
00:06:13.116  [2024-12-13 23:39:43.788776] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffd5a47be70
00:06:13.116  passed
00:06:13.116    Test: for_each_channel_and_thread_exit_race ...passed
00:06:13.116    Test: for_each_thread_and_thread_exit_race ...passed
00:06:13.116  
00:06:13.116  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.116                suites      1      1    n/a      0        0
00:06:13.116                 tests     20     20     20      0        0
00:06:13.116               asserts    409    409    409      0      n/a
00:06:13.116  
00:06:13.116  Elapsed time =    0.054 seconds
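The spdk_spin test drives every "unrecoverable spinlock error" path on purpose: locking from a non-SPDK thread, relocking a lock the caller already holds (deadlock), unlocking from the wrong thread, and destroying a held lock. The same discipline can be sketched as a checked lock that records its owner; this is an illustration in plain pthreads, not SPDK's implementation:

    #include <assert.h>
    #include <pthread.h>

    /* A checked lock in the spirit of spdk_spin_*: it records its owner and
     * turns misuse into hard errors instead of silent corruption. */
    struct checked_spin {
        pthread_mutex_t lock;
        pthread_t       owner;
        int             held;
    };

    static void checked_spin_lock(struct checked_spin *s)
    {
        /* "error 2: Deadlock detected" - relock by the current owner */
        assert(!(s->held && pthread_equal(s->owner, pthread_self())));
        pthread_mutex_lock(&s->lock);
        s->owner = pthread_self();
        s->held = 1;
    }

    static void checked_spin_unlock(struct checked_spin *s)
    {
        /* "error 3: Unlock on wrong SPDK thread" */
        assert(s->held && pthread_equal(s->owner, pthread_self()));
        s->held = 0;
        pthread_mutex_unlock(&s->lock);
    }

    static void checked_spin_destroy(struct checked_spin *s)
    {
        assert(!s->held);  /* "error 5: Destroying a held spinlock" */
        pthread_mutex_destroy(&s->lock);
    }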
00:06:13.116  
00:06:13.116  real	0m0.090s
00:06:13.116  user	0m0.058s
00:06:13.116  sys	0m0.033s
00:06:13.116   23:39:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:13.116   23:39:43	-- common/autotest_common.sh@10 -- # set +x
00:06:13.116  ************************************
00:06:13.116  END TEST unittest_thread
00:06:13.116  ************************************
00:06:13.375   23:39:43	-- unit/unittest.sh@256 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:06:13.375   23:39:43	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:13.375   23:39:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:13.375   23:39:43	-- common/autotest_common.sh@10 -- # set +x
00:06:13.375  ************************************
00:06:13.375  START TEST unittest_iobuf
00:06:13.375  ************************************
00:06:13.375   23:39:43	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:06:13.375  
00:06:13.375  
00:06:13.375       CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.375       http://cunit.sourceforge.net/
00:06:13.375  
00:06:13.375  
00:06:13.375  Suite: io_channel
00:06:13.375    Test: iobuf ...passed
00:06:13.375    Test: iobuf_cache ...[2024-12-13 23:39:43.893107] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:06:13.375  [2024-12-13 23:39:43.893449] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:06:13.375  [2024-12-13 23:39:43.893646] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4)
00:06:13.375  [2024-12-13 23:39:43.893725] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:06:13.375  [2024-12-13 23:39:43.893860] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:06:13.375  [2024-12-13 23:39:43.893963] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:06:13.375  passed
00:06:13.375  
00:06:13.375  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.375                suites      1      1    n/a      0        0
00:06:13.375                 tests      2      2      2      0        0
00:06:13.375               asserts    107    107    107      0      n/a
00:06:13.375  
00:06:13.375  Elapsed time =    0.006 seconds
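The iobuf_cache errors are the expected warning path: each channel pre-populates a per-thread cache from the shared small and large buffer pools, and with spdk_iobuf_opts pool counts of only 4 the populate step fails and the message points at scripts/calc-iobuf.py. The populate logic reduces to a bounds check; a toy sketch under that assumption:

    #include <stdbool.h>
    #include <stddef.h>

    struct buf_pool { size_t avail; };  /* simplified shared pool */

    /* Pre-fill a per-channel cache from a shared pool; fails, like the log's
     * "Failed to populate iobuf small buffer cache", when the pool holds
     * fewer buffers than the caches collectively request. */
    static bool cache_populate(struct buf_pool *pool, size_t cache_size)
    {
        if (pool->avail < cache_size) {
            return false;  /* caller suggests raising small/large_pool_count */
        }
        pool->avail -= cache_size;
        return true;
    }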
00:06:13.375  
00:06:13.375  real	0m0.039s
00:06:13.375  user	0m0.021s
00:06:13.375  sys	0m0.018s
00:06:13.375   23:39:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:13.375   23:39:43	-- common/autotest_common.sh@10 -- # set +x
00:06:13.375  ************************************
00:06:13.375  END TEST unittest_iobuf
00:06:13.375  ************************************
00:06:13.375   23:39:43	-- unit/unittest.sh@257 -- # run_test unittest_util unittest_util
00:06:13.375   23:39:43	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:13.375   23:39:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:13.375   23:39:43	-- common/autotest_common.sh@10 -- # set +x
00:06:13.375  ************************************
00:06:13.375  START TEST unittest_util
00:06:13.375  ************************************
00:06:13.375   23:39:43	-- common/autotest_common.sh@1114 -- # unittest_util
00:06:13.375   23:39:43	-- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut
00:06:13.375  
00:06:13.375  
00:06:13.375       CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.375       http://cunit.sourceforge.net/
00:06:13.375  
00:06:13.375  
00:06:13.375  Suite: base64
00:06:13.375    Test: test_base64_get_encoded_strlen ...passed
00:06:13.375    Test: test_base64_get_decoded_len ...passed
00:06:13.375    Test: test_base64_encode ...passed
00:06:13.375    Test: test_base64_decode ...passed
00:06:13.375    Test: test_base64_urlsafe_encode ...passed
00:06:13.375    Test: test_base64_urlsafe_decode ...passed
00:06:13.375  
00:06:13.375  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.375                suites      1      1    n/a      0        0
00:06:13.375                 tests      6      6      6      0        0
00:06:13.375               asserts    112    112    112      0      n/a
00:06:13.375  
00:06:13.375  Elapsed time =    0.000 seconds
00:06:13.375   23:39:43	-- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut
00:06:13.375  
00:06:13.375  
00:06:13.375       CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.375       http://cunit.sourceforge.net/
00:06:13.375  
00:06:13.375  
00:06:13.375  Suite: bit_array
00:06:13.375    Test: test_1bit ...passed
00:06:13.375    Test: test_64bit ...passed
00:06:13.375    Test: test_find ...passed
00:06:13.375    Test: test_resize ...passed
00:06:13.376    Test: test_errors ...passed
00:06:13.376    Test: test_count ...passed
00:06:13.376    Test: test_mask_store_load ...passed
00:06:13.376    Test: test_mask_clear ...passed
00:06:13.376  
00:06:13.376  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.376                suites      1      1    n/a      0        0
00:06:13.376                 tests      8      8      8      0        0
00:06:13.376               asserts   5075   5075   5075      0      n/a
00:06:13.376  
00:06:13.376  Elapsed time =    0.002 seconds
00:06:13.376   23:39:44	-- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut
00:06:13.376  
00:06:13.376  
00:06:13.376       CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.376       http://cunit.sourceforge.net/
00:06:13.376  
00:06:13.376  
00:06:13.376  Suite: cpuset
00:06:13.376    Test: test_cpuset ...passed
00:06:13.376    Test: test_cpuset_parse ...[2024-12-13 23:39:44.030330] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '['
00:06:13.376  [2024-12-13 23:39:44.031009] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']'
00:06:13.376  [2024-12-13 23:39:44.031170] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-'
00:06:13.376  [2024-12-13 23:39:44.031625] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10)
00:06:13.376  [2024-12-13 23:39:44.031720] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ','
00:06:13.376  [2024-12-13 23:39:44.031768] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ','
00:06:13.376  [2024-12-13 23:39:44.031823] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]'
00:06:13.376  [2024-12-13 23:39:44.031887] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed
00:06:13.376  passed
00:06:13.376    Test: test_cpuset_fmt ...passed
00:06:13.376  
00:06:13.376  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.376                suites      1      1    n/a      0        0
00:06:13.376                 tests      3      3      3      0        0
00:06:13.376               asserts     65     65     65      0      n/a
00:06:13.376  
00:06:13.376  Elapsed time =    0.004 seconds
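test_cpuset_parse enumerates the core-list grammar failures one by one: an unterminated '[', an empty list, a doubled '-', a descending range (11 > 10), leading or trailing commas, a core number beyond the maximum (1025 is rejected, so 1024 is assumed as the ceiling here), and a value too large to convert. A toy parser for the bracketed form showing where each rejection would fire (illustrative only, not SPDK's parse_list):

    #include <stdbool.h>
    #include <stdlib.h>

    #define MAX_CORE 1024  /* inferred from "Core number 1025 is out of range" */

    /* Parse "[a,b-c,...]" into a flag array; every `return false` mirrors
     * one rejection from test_cpuset_parse above. */
    static bool parse_core_list(const char *s, bool cores[MAX_CORE + 1])
    {
        if (*s++ != '[') {
            return false;
        }
        if (*s == ']') {
            return false;                  /* "[]" is rejected */
        }
        while (*s != ']') {
            char *end;
            long lo, hi;

            if (*s == '\0' || *s == ',') {
                return false;              /* "[" unterminated, or "[,10-11]" */
            }
            lo = strtol(s, &end, 10);
            if (end == s || lo < 0 || lo > MAX_CORE) {
                return false;              /* "[1025]", "[184467...]" overflow */
            }
            hi = lo;
            if (*end == '-') {
                s = end + 1;
                hi = strtol(s, &end, 10);
                if (end == s || hi < 0 || hi > MAX_CORE) {
                    return false;          /* "[10--11]" lands here */
                }
                if (hi < lo) {
                    return false;          /* "Invalid range of CPUs (11 > 10)" */
                }
            }
            for (long i = lo; i <= hi; i++) {
                cores[i] = true;
            }
            s = end;
            if (*s == ',') {
                s++;
                if (*s == ']') {
                    return false;          /* trailing comma, "[10-11,]" */
                }
            } else if (*s != ']') {
                return false;
            }
        }
        return true;
    }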
00:06:13.376   23:39:44	-- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut
00:06:13.376  
00:06:13.376  
00:06:13.376       CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.376       http://cunit.sourceforge.net/
00:06:13.376  
00:06:13.376  
00:06:13.376  Suite: crc16
00:06:13.376    Test: test_crc16_t10dif ...passed
00:06:13.376    Test: test_crc16_t10dif_seed ...passed
00:06:13.376    Test: test_crc16_t10dif_copy ...passed
00:06:13.376  
00:06:13.376  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.376                suites      1      1    n/a      0        0
00:06:13.376                 tests      3      3      3      0        0
00:06:13.376               asserts      5      5      5      0      n/a
00:06:13.376  
00:06:13.376  Elapsed time =    0.000 seconds
00:06:13.376   23:39:44	-- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut
00:06:13.376  
00:06:13.376  
00:06:13.376       CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.376       http://cunit.sourceforge.net/
00:06:13.376  
00:06:13.376  
00:06:13.376  Suite: crc32_ieee
00:06:13.376    Test: test_crc32_ieee ...passed
00:06:13.376  
00:06:13.376  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.376                suites      1      1    n/a      0        0
00:06:13.376                 tests      1      1      1      0        0
00:06:13.376               asserts      1      1      1      0      n/a
00:06:13.376  
00:06:13.376  Elapsed time =    0.000 seconds
00:06:13.376   23:39:44	-- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut
00:06:13.636  
00:06:13.636  
00:06:13.636       CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.636       http://cunit.sourceforge.net/
00:06:13.636  
00:06:13.636  
00:06:13.636  Suite: crc32c
00:06:13.636    Test: test_crc32c ...passed
00:06:13.636    Test: test_crc32c_nvme ...passed
00:06:13.636  
00:06:13.637  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.637                suites      1      1    n/a      0        0
00:06:13.637                 tests      2      2      2      0        0
00:06:13.637               asserts     16     16     16      0      n/a
00:06:13.637  
00:06:13.637  Elapsed time =    0.001 seconds
00:06:13.637   23:39:44	-- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut
00:06:13.637  
00:06:13.637  
00:06:13.637       CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.637       http://cunit.sourceforge.net/
00:06:13.637  
00:06:13.637  
00:06:13.637  Suite: crc64
00:06:13.637    Test: test_crc64_nvme ...passed
00:06:13.637  
00:06:13.637  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.637                suites      1      1    n/a      0        0
00:06:13.637                 tests      1      1      1      0        0
00:06:13.637               asserts      4      4      4      0      n/a
00:06:13.637  
00:06:13.637  Elapsed time =    0.000 seconds
00:06:13.637   23:39:44	-- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut
00:06:13.637  
00:06:13.637  
00:06:13.637       CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.637       http://cunit.sourceforge.net/
00:06:13.637  
00:06:13.637  
00:06:13.637  Suite: string
00:06:13.637    Test: test_parse_ip_addr ...passed
00:06:13.637    Test: test_str_chomp ...passed
00:06:13.637    Test: test_parse_capacity ...passed
00:06:13.637    Test: test_sprintf_append_realloc ...passed
00:06:13.637    Test: test_strtol ...passed
00:06:13.637    Test: test_strtoll ...passed
00:06:13.637    Test: test_strarray ...passed
00:06:13.637    Test: test_strcpy_replace ...passed
00:06:13.637  
00:06:13.637  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.637                suites      1      1    n/a      0        0
00:06:13.637                 tests      8      8      8      0        0
00:06:13.637               asserts    161    161    161      0      n/a
00:06:13.637  
00:06:13.637  Elapsed time =    0.001 seconds
00:06:13.637   23:39:44	-- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut
00:06:13.637  
00:06:13.637  
00:06:13.637       CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.637       http://cunit.sourceforge.net/
00:06:13.637  
00:06:13.637  
00:06:13.637  Suite: dif
00:06:13.637    Test: dif_generate_and_verify_test ...[2024-12-13 23:39:44.196829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:06:13.637  [2024-12-13 23:39:44.197402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:06:13.637  [2024-12-13 23:39:44.197769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:06:13.637  [2024-12-13 23:39:44.198067] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:06:13.637  [2024-12-13 23:39:44.198367] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:06:13.637  [2024-12-13 23:39:44.198669] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:06:13.637  passed
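
Despite appearances, the *ERROR* lines in this suite do not indicate failures: the dif tests deliberately corrupt protection information and then assert that _dif_verify rejects it, so the error prints are the expected by-product of passing negative tests (note the trailing "passed" lines and the 0 in the Failed column of the Run Summary further down). The fields being compared belong to the 8-byte T10 DIF tuple appended to each block; a sketch of its layout, with field names matching the log messages:

    #include <stdint.h>

    /* 8-byte T10 DIF tuple stored per block (fields are big-endian on the
     * wire; names follow the Guard/App Tag/Ref Tag log messages). */
    struct t10_dif_tuple {
        uint16_t guard;    /* CRC over the block's data            */
        uint16_t app_tag;  /* application-defined tag              */
        uint32_t ref_tag;  /* reference tag, typically LBA-derived */
    };
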
00:06:13.637    Test: dif_disable_check_test ...[2024-12-13 23:39:44.199739] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:06:13.637  [2024-12-13 23:39:44.200141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:06:13.637  [2024-12-13 23:39:44.200436] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:06:13.637  passed
00:06:13.637    Test: dif_generate_and_verify_different_pi_formats_test ...[2024-12-13 23:39:44.201497] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b0a80000, Actual=b9848de
00:06:13.637  [2024-12-13 23:39:44.201849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b98, Actual=b0a8
00:06:13.637  [2024-12-13 23:39:44.202192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b0a8000000000000, Actual=81039fcf5685d8d4
00:06:13.637  [2024-12-13 23:39:44.202557] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b9848de00000000, Actual=81039fcf5685d8d4
00:06:13.637  [2024-12-13 23:39:44.202956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:06:13.637  [2024-12-13 23:39:44.203287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:06:13.637  [2024-12-13 23:39:44.203605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:06:13.637  [2024-12-13 23:39:44.203919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:06:13.637  [2024-12-13 23:39:44.204230] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:06:13.637  [2024-12-13 23:39:44.204561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:06:13.637  [2024-12-13 23:39:44.204907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:06:13.637  passed
00:06:13.637    Test: dif_apptag_mask_test ...[2024-12-13 23:39:44.205240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=1256, Actual=1234
00:06:13.637  [2024-12-13 23:39:44.205552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=1256, Actual=1234
00:06:13.637  passed
00:06:13.637    Test: dif_sec_512_md_0_error_test ...[2024-12-13 23:39:44.205798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:06:13.637  passed
00:06:13.637    Test: dif_sec_4096_md_0_error_test ...[2024-12-13 23:39:44.205851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:06:13.637  [2024-12-13 23:39:44.205899] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:06:13.637  passed
00:06:13.637    Test: dif_sec_4100_md_128_error_test ...[2024-12-13 23:39:44.205977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB
00:06:13.637  passed
00:06:13.637    Test: dif_guard_seed_test ...passed
00:06:13.637    Test: dif_guard_value_test ...[2024-12-13 23:39:44.206021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB
00:06:13.637  passed
00:06:13.637    Test: dif_disable_sec_512_md_8_single_iov_test ...passed
00:06:13.637    Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed
00:06:13.637    Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:06:13.637    Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed
00:06:13.637    Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:06:13.637    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed
00:06:13.637    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed
00:06:13.637    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed
00:06:13.637    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed
00:06:13.637    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:06:13.637    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed
00:06:13.637    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed
00:06:13.637    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed
00:06:13.637    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed
00:06:13.637    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed
00:06:13.637    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed
00:06:13.637    Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed
00:06:13.637    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:06:13.637    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-12-13 23:39:44.250638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=f94c, Actual=fd4c
00:06:13.637  [2024-12-13 23:39:44.253138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=fa21, Actual=fe21
00:06:13.637  [2024-12-13 23:39:44.255646] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.637  [2024-12-13 23:39:44.258147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.637  [2024-12-13 23:39:44.260680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060
00:06:13.637  [2024-12-13 23:39:44.263178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060
00:06:13.637  [2024-12-13 23:39:44.265662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=fd4c, Actual=c01f
00:06:13.637  [2024-12-13 23:39:44.267984] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=fe21, Actual=d14d
00:06:13.637  [2024-12-13 23:39:44.270342] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=1eb753ed, Actual=1ab753ed
00:06:13.637  [2024-12-13 23:39:44.272814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=3c574660, Actual=38574660
00:06:13.637  [2024-12-13 23:39:44.275336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.637  [2024-12-13 23:39:44.277814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.637  [2024-12-13 23:39:44.280291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=400000000000060
00:06:13.637  [2024-12-13 23:39:44.282790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=400000000000060
00:06:13.637  [2024-12-13 23:39:44.285256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=1ab753ed, Actual=6e06aa1
00:06:13.637  [2024-12-13 23:39:44.287604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=38574660, Actual=bddc0874
00:06:13.637  [2024-12-13 23:39:44.289993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3
00:06:13.637  [2024-12-13 23:39:44.292491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=8c010a2d4837a266, Actual=88010a2d4837a266
00:06:13.637  [2024-12-13 23:39:44.294977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.637  [2024-12-13 23:39:44.297450] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.637  [2024-12-13 23:39:44.299937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=460
00:06:13.638  [2024-12-13 23:39:44.302425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=460
00:06:13.638  [2024-12-13 23:39:44.304919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=a576a7728ecc20d3, Actual=76a02e4a05959a93
00:06:13.638  [2024-12-13 23:39:44.307299] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=88010a2d4837a266, Actual=85b96947b509a18
00:06:13.638  passed
00:06:13.638    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-12-13 23:39:44.308740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=f94c, Actual=fd4c
00:06:13.638  [2024-12-13 23:39:44.309063] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fa21, Actual=fe21
00:06:13.638  [2024-12-13 23:39:44.309376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.309696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.310022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:06:13.638  [2024-12-13 23:39:44.310325] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:06:13.638  [2024-12-13 23:39:44.310631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=c01f
00:06:13.638  [2024-12-13 23:39:44.310929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=d14d
00:06:13.638  [2024-12-13 23:39:44.311234] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1eb753ed, Actual=1ab753ed
00:06:13.638  [2024-12-13 23:39:44.311537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=3c574660, Actual=38574660
00:06:13.638  [2024-12-13 23:39:44.311850] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.312155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.312453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058
00:06:13.638  [2024-12-13 23:39:44.312754] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058
00:06:13.638  [2024-12-13 23:39:44.313056] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=6e06aa1
00:06:13.638  [2024-12-13 23:39:44.313350] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=bddc0874
00:06:13.638  [2024-12-13 23:39:44.313676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3
00:06:13.638  [2024-12-13 23:39:44.313977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=8c010a2d4837a266, Actual=88010a2d4837a266
00:06:13.638  [2024-12-13 23:39:44.314278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.314588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.314900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458
00:06:13.638  [2024-12-13 23:39:44.315201] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458
00:06:13.638  [2024-12-13 23:39:44.315512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=76a02e4a05959a93
00:06:13.638  [2024-12-13 23:39:44.315825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=85b96947b509a18
00:06:13.638  passed
00:06:13.638    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-12-13 23:39:44.316167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=f94c, Actual=fd4c
00:06:13.638  [2024-12-13 23:39:44.316474] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fa21, Actual=fe21
00:06:13.638  [2024-12-13 23:39:44.316776] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.317083] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.317400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:06:13.638  [2024-12-13 23:39:44.317725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:06:13.638  [2024-12-13 23:39:44.318042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=c01f
00:06:13.638  [2024-12-13 23:39:44.318344] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=d14d
00:06:13.638  [2024-12-13 23:39:44.318638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1eb753ed, Actual=1ab753ed
00:06:13.638  [2024-12-13 23:39:44.318946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=3c574660, Actual=38574660
00:06:13.638  [2024-12-13 23:39:44.319255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.319555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.319857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058
00:06:13.638  [2024-12-13 23:39:44.320157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058
00:06:13.638  [2024-12-13 23:39:44.320446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=6e06aa1
00:06:13.638  [2024-12-13 23:39:44.320738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=bddc0874
00:06:13.638  [2024-12-13 23:39:44.321054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3
00:06:13.638  [2024-12-13 23:39:44.321347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=8c010a2d4837a266, Actual=88010a2d4837a266
00:06:13.638  [2024-12-13 23:39:44.321661] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.321969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.322272] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458
00:06:13.638  [2024-12-13 23:39:44.322571] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458
00:06:13.638  [2024-12-13 23:39:44.322905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=76a02e4a05959a93
00:06:13.638  [2024-12-13 23:39:44.323197] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=85b96947b509a18
00:06:13.638  passed
00:06:13.638    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-12-13 23:39:44.323538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=f94c, Actual=fd4c
00:06:13.638  [2024-12-13 23:39:44.323871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fa21, Actual=fe21
00:06:13.638  [2024-12-13 23:39:44.324185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.324482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.324811] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:06:13.638  [2024-12-13 23:39:44.325118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:06:13.638  [2024-12-13 23:39:44.325426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=c01f
00:06:13.638  [2024-12-13 23:39:44.325736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=d14d
00:06:13.638  [2024-12-13 23:39:44.326041] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1eb753ed, Actual=1ab753ed
00:06:13.638  [2024-12-13 23:39:44.326343] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=3c574660, Actual=38574660
00:06:13.638  [2024-12-13 23:39:44.326666] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.326987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.327306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058
00:06:13.638  [2024-12-13 23:39:44.327615] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058
00:06:13.638  [2024-12-13 23:39:44.327921] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=6e06aa1
00:06:13.638  [2024-12-13 23:39:44.328223] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=bddc0874
00:06:13.638  [2024-12-13 23:39:44.328529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3
00:06:13.638  [2024-12-13 23:39:44.328841] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=8c010a2d4837a266, Actual=88010a2d4837a266
00:06:13.638  [2024-12-13 23:39:44.329138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.329452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.638  [2024-12-13 23:39:44.329769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458
00:06:13.638  [2024-12-13 23:39:44.330089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458
00:06:13.638  [2024-12-13 23:39:44.330412] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=76a02e4a05959a93
00:06:13.638  [2024-12-13 23:39:44.330723] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=85b96947b509a18
00:06:13.638  passed
00:06:13.638    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-12-13 23:39:44.331066] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=f94c, Actual=fd4c
00:06:13.639  [2024-12-13 23:39:44.331368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fa21, Actual=fe21
00:06:13.639  [2024-12-13 23:39:44.331674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.639  [2024-12-13 23:39:44.331983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.639  [2024-12-13 23:39:44.332307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:06:13.639  [2024-12-13 23:39:44.332609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:06:13.639  [2024-12-13 23:39:44.332904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=c01f
00:06:13.639  [2024-12-13 23:39:44.333204] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=d14d
00:06:13.639  passed
00:06:13.639    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-12-13 23:39:44.333550] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1eb753ed, Actual=1ab753ed
00:06:13.639  [2024-12-13 23:39:44.333873] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=3c574660, Actual=38574660
00:06:13.639  [2024-12-13 23:39:44.334196] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.639  [2024-12-13 23:39:44.334504] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.639  [2024-12-13 23:39:44.334814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058
00:06:13.639  [2024-12-13 23:39:44.335122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058
00:06:13.639  [2024-12-13 23:39:44.335429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=6e06aa1
00:06:13.639  [2024-12-13 23:39:44.335727] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=bddc0874
00:06:13.639  [2024-12-13 23:39:44.336073] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3
00:06:13.639  [2024-12-13 23:39:44.336406] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=8c010a2d4837a266, Actual=88010a2d4837a266
00:06:13.639  [2024-12-13 23:39:44.336707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.639  [2024-12-13 23:39:44.337017] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.639  [2024-12-13 23:39:44.337321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458
00:06:13.639  [2024-12-13 23:39:44.337637] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458
00:06:13.639  [2024-12-13 23:39:44.337967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=76a02e4a05959a93
00:06:13.639  [2024-12-13 23:39:44.338275] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=85b96947b509a18
00:06:13.639  passed
00:06:13.639    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-12-13 23:39:44.338614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=f94c, Actual=fd4c
00:06:13.639  [2024-12-13 23:39:44.338927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fa21, Actual=fe21
00:06:13.639  [2024-12-13 23:39:44.339231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.639  [2024-12-13 23:39:44.339538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.639  [2024-12-13 23:39:44.339865] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:06:13.639  [2024-12-13 23:39:44.340167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:06:13.639  [2024-12-13 23:39:44.340466] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=c01f
00:06:13.639  [2024-12-13 23:39:44.340758] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=d14d
00:06:13.639  passed
00:06:13.639    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-12-13 23:39:44.341101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1eb753ed, Actual=1ab753ed
00:06:13.639  [2024-12-13 23:39:44.341400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=3c574660, Actual=38574660
00:06:13.639  [2024-12-13 23:39:44.341736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.639  [2024-12-13 23:39:44.342059] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.639  [2024-12-13 23:39:44.342375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058
00:06:13.639  [2024-12-13 23:39:44.342675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058
00:06:13.639  [2024-12-13 23:39:44.342998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=6e06aa1
00:06:13.639  [2024-12-13 23:39:44.343294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=bddc0874
00:06:13.639  [2024-12-13 23:39:44.343634] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3
00:06:13.639  [2024-12-13 23:39:44.343941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=8c010a2d4837a266, Actual=88010a2d4837a266
00:06:13.639  [2024-12-13 23:39:44.344242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.639  [2024-12-13 23:39:44.344542] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.639  [2024-12-13 23:39:44.344850] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458
00:06:13.639  [2024-12-13 23:39:44.345153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458
00:06:13.639  [2024-12-13 23:39:44.345472] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=76a02e4a05959a93
00:06:13.639  [2024-12-13 23:39:44.345805] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=85b96947b509a18
00:06:13.639  passed
00:06:13.639    Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed
00:06:13.639    Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:06:13.639    Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed
00:06:13.899    Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:06:13.899    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed
00:06:13.899    Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed
00:06:13.899    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:06:13.899    Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed
00:06:13.899    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:06:13.899    Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-12-13 23:39:44.390038] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=f94c, Actual=fd4c
00:06:13.899  [2024-12-13 23:39:44.391185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=4d2, Actual=d2
00:06:13.899  [2024-12-13 23:39:44.392318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.899  [2024-12-13 23:39:44.393477] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.899  [2024-12-13 23:39:44.394633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060
00:06:13.899  [2024-12-13 23:39:44.395783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060
00:06:13.899  [2024-12-13 23:39:44.396915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=fd4c, Actual=c01f
00:06:13.899  [2024-12-13 23:39:44.398062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=e4c1, Actual=cbad
00:06:13.899  [2024-12-13 23:39:44.399232] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=1eb753ed, Actual=1ab753ed
00:06:13.899  [2024-12-13 23:39:44.400405] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=db72f83e, Actual=df72f83e
00:06:13.899  [2024-12-13 23:39:44.401553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.899  [2024-12-13 23:39:44.402753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.899  [2024-12-13 23:39:44.403888] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=400000000000060
00:06:13.899  [2024-12-13 23:39:44.405045] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=400000000000060
00:06:13.899  [2024-12-13 23:39:44.406193] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=1ab753ed, Actual=6e06aa1
00:06:13.899  [2024-12-13 23:39:44.407344] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=741688fe, Actual=f19dc6ea
00:06:13.899  [2024-12-13 23:39:44.408497] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3
00:06:13.899  [2024-12-13 23:39:44.409668] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=57e3d76652eea696, Actual=53e3d76652eea696
00:06:13.899  [2024-12-13 23:39:44.410832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.899  [2024-12-13 23:39:44.411989] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.899  [2024-12-13 23:39:44.413112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=460
00:06:13.899  [2024-12-13 23:39:44.414265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=460
00:06:13.899  [2024-12-13 23:39:44.415438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=a576a7728ecc20d3, Actual=76a02e4a05959a93
00:06:13.899  [2024-12-13 23:39:44.416615] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=4c87c8f68d0ed55f, Actual=ccdd544fbe69ed21
00:06:13.899  passed
00:06:13.899    Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-12-13 23:39:44.416987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=f94c, Actual=fd4c
00:06:13.899  [2024-12-13 23:39:44.417264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=cd05, Actual=c905
00:06:13.899  [2024-12-13 23:39:44.417538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.899  [2024-12-13 23:39:44.417826] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.899  [2024-12-13 23:39:44.418110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:06:13.900  [2024-12-13 23:39:44.418408] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:06:13.900  [2024-12-13 23:39:44.418674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=c01f
00:06:13.900  [2024-12-13 23:39:44.418969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2d16, Actual=27a
00:06:13.900  [2024-12-13 23:39:44.419241] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1eb753ed, Actual=1ab753ed
00:06:13.900  [2024-12-13 23:39:44.419519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=45c2306c, Actual=41c2306c
00:06:13.900  [2024-12-13 23:39:44.419796] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.900  [2024-12-13 23:39:44.420074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.900  [2024-12-13 23:39:44.420345] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058
00:06:13.900  [2024-12-13 23:39:44.420622] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058
00:06:13.900  [2024-12-13 23:39:44.420883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=6e06aa1
00:06:13.900  [2024-12-13 23:39:44.421154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=eaa640ac, Actual=6f2d0eb8
00:06:13.900  [2024-12-13 23:39:44.421435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3
00:06:13.900  [2024-12-13 23:39:44.421715] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2302358b893a436b, Actual=2702358b893a436b
00:06:13.900  [2024-12-13 23:39:44.421983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.900  [2024-12-13 23:39:44.422250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.900  [2024-12-13 23:39:44.422523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458
00:06:13.900  [2024-12-13 23:39:44.422813] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458
00:06:13.900  [2024-12-13 23:39:44.423098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=76a02e4a05959a93
00:06:13.900  [2024-12-13 23:39:44.423376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38662a1b56da30a2, Actual=b83cb6a265bd08dc
00:06:13.900  passed
00:06:13.900    Test: dix_sec_512_md_0_error ...[2024-12-13 23:39:44.423458] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:06:13.900  passed
00:06:13.900    Test: dix_sec_512_md_8_prchk_0_single_iov ...passed
00:06:13.900    Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:06:13.900    Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed
00:06:13.900    Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:06:13.900    Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed
00:06:13.900    Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed
00:06:13.900    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:06:13.900    Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed
00:06:13.900    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:06:13.900    Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-12-13 23:39:44.466932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=f94c, Actual=fd4c
00:06:13.900  [2024-12-13 23:39:44.468092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=4d2, Actual=d2
00:06:13.900  [2024-12-13 23:39:44.469220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.900  [2024-12-13 23:39:44.470375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.900  [2024-12-13 23:39:44.471546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060
00:06:13.900  [2024-12-13 23:39:44.472681] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060
00:06:13.900  [2024-12-13 23:39:44.473827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=fd4c, Actual=c01f
00:06:13.900  [2024-12-13 23:39:44.474961] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=e4c1, Actual=cbad
00:06:13.900  [2024-12-13 23:39:44.476071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=1eb753ed, Actual=1ab753ed
00:06:13.900  [2024-12-13 23:39:44.477214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=db72f83e, Actual=df72f83e
00:06:13.900  [2024-12-13 23:39:44.478397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.900  [2024-12-13 23:39:44.479382] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.900  [2024-12-13 23:39:44.480292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=400000000000060
00:06:13.900  [2024-12-13 23:39:44.481207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=400000000000060
00:06:13.900  [2024-12-13 23:39:44.482132] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=1ab753ed, Actual=6e06aa1
00:06:13.900  [2024-12-13 23:39:44.483060] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=741688fe, Actual=f19dc6ea
00:06:13.900  [2024-12-13 23:39:44.483983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3
00:06:13.900  [2024-12-13 23:39:44.484882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=57e3d76652eea696, Actual=53e3d76652eea696
00:06:13.900  [2024-12-13 23:39:44.485810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.900  [2024-12-13 23:39:44.486730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=488
00:06:13.900  [2024-12-13 23:39:44.487587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=460
00:06:13.900  [2024-12-13 23:39:44.488502] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=460
00:06:13.900  [2024-12-13 23:39:44.489398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=a576a7728ecc20d3, Actual=76a02e4a05959a93
00:06:13.900  passed
00:06:13.900    Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-12-13 23:39:44.490320] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=4c87c8f68d0ed55f, Actual=ccdd544fbe69ed21
00:06:13.900  [2024-12-13 23:39:44.490625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=f94c, Actual=fd4c
00:06:13.900  [2024-12-13 23:39:44.490836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=cd05, Actual=c905
00:06:13.900  [2024-12-13 23:39:44.491054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.900  [2024-12-13 23:39:44.491264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.900  [2024-12-13 23:39:44.491476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:06:13.900  [2024-12-13 23:39:44.491682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:06:13.900  [2024-12-13 23:39:44.491879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=c01f
00:06:13.900  [2024-12-13 23:39:44.492085] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2d16, Actual=27a
00:06:13.900  [2024-12-13 23:39:44.492284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1eb753ed, Actual=1ab753ed
00:06:13.900  [2024-12-13 23:39:44.492480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=45c2306c, Actual=41c2306c
00:06:13.900  [2024-12-13 23:39:44.492693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.900  [2024-12-13 23:39:44.492892] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.900  [2024-12-13 23:39:44.493079] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058
00:06:13.900  [2024-12-13 23:39:44.493278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058
00:06:13.900  [2024-12-13 23:39:44.493472] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=6e06aa1
00:06:13.900  [2024-12-13 23:39:44.493688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=eaa640ac, Actual=6f2d0eb8
00:06:13.900  [2024-12-13 23:39:44.493898] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3
00:06:13.900  [2024-12-13 23:39:44.494108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2302358b893a436b, Actual=2702358b893a436b
00:06:13.900  [2024-12-13 23:39:44.494307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.900  [2024-12-13 23:39:44.494513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=488
00:06:13.900  [2024-12-13 23:39:44.494714] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458
00:06:13.900  [2024-12-13 23:39:44.494915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458
00:06:13.900  [2024-12-13 23:39:44.495114] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=76a02e4a05959a93
00:06:13.900  [2024-12-13 23:39:44.495315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38662a1b56da30a2, Actual=b83cb6a265bd08dc
00:06:13.900  passed
00:06:13.900    Test: set_md_interleave_iovs_test ...passed
00:06:13.900    Test: set_md_interleave_iovs_split_test ...passed
00:06:13.900    Test: dif_generate_stream_pi_16_test ...passed
00:06:13.900    Test: dif_generate_stream_test ...passed
00:06:13.900    Test: set_md_interleave_iovs_alignment_test ...[2024-12-13 23:39:44.500640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur.
00:06:13.900  passed
00:06:13.900    Test: dif_generate_split_test ...passed
00:06:13.900    Test: set_md_interleave_iovs_multi_segments_test ...passed
00:06:13.900    Test: dif_verify_split_test ...passed
00:06:13.900    Test: dif_verify_stream_multi_segments_test ...passed
00:06:13.900    Test: update_crc32c_pi_16_test ...passed
00:06:13.901    Test: update_crc32c_test ...passed
00:06:13.901    Test: dif_update_crc32c_split_test ...passed
00:06:13.901    Test: dif_update_crc32c_stream_multi_segments_test ...passed
00:06:13.901    Test: get_range_with_md_test ...passed
00:06:13.901    Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed
00:06:13.901    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed
00:06:13.901    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed
00:06:13.901    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed
00:06:13.901    Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed
00:06:13.901    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed
00:06:13.901    Test: dif_generate_and_verify_unmap_test ...passed
00:06:13.901  
00:06:13.901  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.901                suites      1      1    n/a      0        0
00:06:13.901                 tests     79     79     79      0        0
00:06:13.901               asserts   3584   3584   3584      0      n/a
00:06:13.901  
00:06:13.901  Elapsed time =    0.336 seconds
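
All 79 dif tests passed; every ERROR above was injected. A self-contained illustration of the inject-then-verify pattern, using values taken from the log (dif_verify below is a stand-in, not lib/util/dif.c's _dif_verify):

    #include <assert.h>
    #include <stdint.h>

    struct pi { uint16_t guard, app_tag; uint32_t ref_tag; };

    /* Stand-in verifier mirroring the Guard/App Tag/Ref Tag comparisons
     * reported in the log. */
    static int dif_verify(const struct pi *stored, const struct pi *expected)
    {
        if (stored->guard != expected->guard)     return -1; /* "Failed to compare Guard" */
        if (stored->app_tag != expected->app_tag) return -1; /* "... App Tag" */
        if (stored->ref_tag != expected->ref_tag) return -1; /* "... Ref Tag" */
        return 0;
    }

    int main(void)
    {
        struct pi expected = { .guard = 0xf94c, .app_tag = 0x88, .ref_tag = 0x60 };
        struct pi stored = expected;
        stored.guard ^= 0x0400; /* inject: Expected=f94c, Actual=fd4c, as in the log */
        assert(dif_verify(&stored, &expected) != 0); /* the failure IS the pass condition */
        return 0;
    }
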
00:06:13.901   23:39:44	-- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut
00:06:13.901  
00:06:13.901  
00:06:13.901       CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.901       http://cunit.sourceforge.net/
00:06:13.901  
00:06:13.901  
00:06:13.901  Suite: iov
00:06:13.901    Test: test_single_iov ...passed
00:06:13.901    Test: test_simple_iov ...passed
00:06:13.901    Test: test_complex_iov ...passed
00:06:13.901    Test: test_iovs_to_buf ...passed
00:06:13.901    Test: test_buf_to_iovs ...passed
00:06:13.901    Test: test_memset ...passed
00:06:13.901    Test: test_iov_one ...passed
00:06:13.901    Test: test_iov_xfer ...passed
00:06:13.901  
00:06:13.901  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.901                suites      1      1    n/a      0        0
00:06:13.901                 tests      8      8      8      0        0
00:06:13.901               asserts    156    156    156      0      n/a
00:06:13.901  
00:06:13.901  Elapsed time =    0.001 seconds
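
The iov suite covers scatter-gather handling. A generic sketch of what test_iovs_to_buf presumably covers — flattening an iovec list into one contiguous buffer (the function name and shape are mine, not SPDK's):

    #include <stddef.h>
    #include <string.h>
    #include <sys/uio.h>

    /* Copy up to buflen bytes from a scatter-gather list into a flat
     * buffer; returns the number of bytes copied. */
    static size_t iovs_to_buf(const struct iovec *iovs, int iovcnt,
                              void *buf, size_t buflen)
    {
        size_t off = 0;
        for (int i = 0; i < iovcnt && off < buflen; i++) {
            size_t n = iovs[i].iov_len;
            if (off + n > buflen) {
                n = buflen - off;
            }
            memcpy((char *)buf + off, iovs[i].iov_base, n);
            off += n;
        }
        return off;
    }
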
00:06:13.901   23:39:44	-- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut
00:06:13.901  
00:06:13.901  
00:06:13.901       CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.901       http://cunit.sourceforge.net/
00:06:13.901  
00:06:13.901  
00:06:13.901  Suite: math
00:06:13.901    Test: test_serial_number_arithmetic ...passed
00:06:13.901  Suite: erase
00:06:13.901    Test: test_memset_s ...passed
00:06:13.901  
00:06:13.901  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.901                suites      2      2    n/a      0        0
00:06:13.901                 tests      2      2      2      0        0
00:06:13.901               asserts     18     18     18      0      n/a
00:06:13.901  
00:06:13.901  Elapsed time =    0.000 seconds
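
test_memset_s in the erase suite concerns guaranteed memory wiping. A sketch of the usual idiom such a helper relies on — a volatile function pointer keeps the compiler from optimizing the wipe away (the actual helper lives in SPDK's util library; this is not its source):

    #include <stddef.h>
    #include <string.h>

    /* Volatile indirection defeats dead-store elimination of the wipe. */
    static void *(*volatile memset_fn)(void *, int, size_t) = memset;

    static void secure_memset(void *buf, size_t len)
    {
        memset_fn(buf, 0, len);
    }
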
00:06:13.901   23:39:44	-- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut
00:06:13.901  
00:06:13.901  
00:06:13.901       CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.901       http://cunit.sourceforge.net/
00:06:13.901  
00:06:13.901  
00:06:13.901  Suite: pipe
00:06:13.901    Test: test_create_destroy ...passed
00:06:13.901    Test: test_write_get_buffer ...passed
00:06:13.901    Test: test_write_advance ...passed
00:06:13.901    Test: test_read_get_buffer ...passed
00:06:13.901    Test: test_read_advance ...passed
00:06:13.901    Test: test_data ...passed
00:06:13.901  
00:06:13.901  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.901                suites      1      1    n/a      0        0
00:06:13.901                 tests      6      6      6      0        0
00:06:13.901               asserts    250    250    250      0      n/a
00:06:13.901  
00:06:13.901  Elapsed time =    0.001 seconds
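(The pipe suite follows a get-buffer/advance pattern: the caller asks for a contiguous region, fills it, then commits. A toy single-producer ring showing the shape of that API; the field names and count-based bookkeeping are assumptions, not SPDK's lib/util/pipe.c:)

    #include <stddef.h>

    struct byte_pipe {
        unsigned char *buf;
        size_t cap, write, read, count;  /* count = bytes stored */
    };

    /* Expose the largest contiguous writable region at the tail. */
    static size_t pipe_write_get_buffer(struct byte_pipe *p, void **out)
    {
        size_t space = p->cap - p->count;       /* total free bytes   */
        size_t until_wrap = p->cap - p->write;  /* contiguous at tail */
        *out = p->buf + p->write;
        return space < until_wrap ? space : until_wrap;
    }

    /* Commit n bytes written into the exposed region. */
    static void pipe_write_advance(struct byte_pipe *p, size_t n)
    {
        p->write = (p->write + n) % p->cap;
        p->count += n;
    }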
00:06:13.901   23:39:44	-- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut
00:06:14.160  
00:06:14.160  
00:06:14.160       CUnit - A unit testing framework for C - Version 2.1-3
00:06:14.160       http://cunit.sourceforge.net/
00:06:14.160  
00:06:14.160  
00:06:14.160  Suite: xor
00:06:14.160    Test: test_xor_gen ...passed
00:06:14.160  
00:06:14.160  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:14.160                suites      1      1    n/a      0        0
00:06:14.160                 tests      1      1      1      0        0
00:06:14.160               asserts     17     17     17      0      n/a
00:06:14.160  
00:06:14.160  Elapsed time =    0.004 seconds
00:06:14.160  
00:06:14.160  real	0m0.693s
00:06:14.160  user	0m0.517s
00:06:14.160  sys	0m0.181s
00:06:14.160   23:39:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:14.160   23:39:44	-- common/autotest_common.sh@10 -- # set +x
00:06:14.160  ************************************
00:06:14.160  END TEST unittest_util
00:06:14.160  ************************************
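(Among the util tests above, test_xor_gen checks parity generation over multiple equally sized buffers. The core operation is a byte-wise XOR reduction; a scalar sketch, where real code would vectorize:)

    #include <stddef.h>

    /* XOR n equally sized source buffers into dest. */
    static void xor_gen(void *dest, void **sources, int n, size_t len)
    {
        unsigned char *d = dest;
        for (size_t i = 0; i < len; i++) {
            unsigned char v = 0;
            for (int s = 0; s < n; s++)
                v ^= ((const unsigned char *)sources[s])[i];
            d[i] = v;
        }
    }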
00:06:14.160   23:39:44	-- unit/unittest.sh@258 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:14.160   23:39:44	-- unit/unittest.sh@259 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut
00:06:14.160   23:39:44	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:14.160   23:39:44	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:14.160   23:39:44	-- common/autotest_common.sh@10 -- # set +x
00:06:14.160  ************************************
00:06:14.160  START TEST unittest_vhost
00:06:14.160  ************************************
00:06:14.160   23:39:44	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut
00:06:14.160  
00:06:14.160  
00:06:14.160       CUnit - A unit testing framework for C - Version 2.1-3
00:06:14.160       http://cunit.sourceforge.net/
00:06:14.160  
00:06:14.160  
00:06:14.160  Suite: vhost_suite
00:06:14.160    Test: desc_to_iov_test ...[2024-12-13 23:39:44.731975] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached
00:06:14.160  passed
00:06:14.160    Test: create_controller_test ...[2024-12-13 23:39:44.736288] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c:  80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f)
00:06:14.160  [2024-12-13 23:39:44.736425] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf)
00:06:14.160  [2024-12-13 23:39:44.736552] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c:  80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f)
00:06:14.160  [2024-12-13 23:39:44.736667] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf)
00:06:14.160  [2024-12-13 23:39:44.736721] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name
00:06:14.160  [2024-12-13 23:39:44.736823] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-12-13 23:39:44.737821] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists.
00:06:14.160  passed
00:06:14.160    Test: session_find_by_vid_test ...passed
00:06:14.160    Test: remove_controller_test ...[2024-12-13 23:39:44.739804] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection.
00:06:14.160  passed
00:06:14.160    Test: vq_avail_ring_get_test ...passed
00:06:14.160    Test: vq_packed_ring_test ...passed
00:06:14.160    Test: vhost_blk_construct_test ...passed
00:06:14.160  
00:06:14.160  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:14.160                suites      1      1    n/a      0        0
00:06:14.160                 tests      7      7      7      0        0
00:06:14.160               asserts    145    145    145      0      n/a
00:06:14.160  
00:06:14.160  Elapsed time =    0.012 seconds
00:06:14.160  
00:06:14.160  real	0m0.049s
00:06:14.160  user	0m0.037s
00:06:14.160  sys	0m0.012s
00:06:14.160   23:39:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:14.160   23:39:44	-- common/autotest_common.sh@10 -- # set +x
00:06:14.160  ************************************
00:06:14.160  END TEST unittest_vhost
00:06:14.160  ************************************
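(create_controller_test above deliberately registers controllers with cpumasks 0xf0 and 0xff against an application core mask of 0xf and expects both rejections logged. The validity check reduces to two bit tests; a sketch with plain 64-bit masks, whereas SPDK's real check goes through struct spdk_cpuset:)

    #include <stdbool.h>
    #include <stdint.h>

    /* A controller cpumask is valid only if it selects at least one
     * core and no core outside the application core mask. */
    static bool cpumask_is_valid(uint64_t requested, uint64_t app_mask)
    {
        if (requested == 0)
            return false;                     /* no cores selected      */
        return (requested & ~app_mask) == 0;  /* e.g. 0xf0 & ~0xf != 0  */
    }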
00:06:14.160   23:39:44	-- unit/unittest.sh@261 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut
00:06:14.160   23:39:44	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:14.160   23:39:44	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:14.160   23:39:44	-- common/autotest_common.sh@10 -- # set +x
00:06:14.160  ************************************
00:06:14.160  START TEST unittest_dma
00:06:14.160  ************************************
00:06:14.160   23:39:44	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut
00:06:14.160  
00:06:14.160  
00:06:14.160       CUnit - A unit testing framework for C - Version 2.1-3
00:06:14.160       http://cunit.sourceforge.net/
00:06:14.160  
00:06:14.160  
00:06:14.160  Suite: dma_suite
00:06:14.160    Test: test_dma ...[2024-12-13 23:39:44.832397] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c:  37:spdk_memory_domain_create: *ERROR*: Context size can't be 0
00:06:14.160  passed
00:06:14.160  
00:06:14.160  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:14.160                suites      1      1    n/a      0        0
00:06:14.160                 tests      1      1      1      0        0
00:06:14.160               asserts     50     50     50      0      n/a
00:06:14.160  
00:06:14.160  Elapsed time =    0.000 seconds
00:06:14.160  
00:06:14.160  real	0m0.030s
00:06:14.160  user	0m0.016s
00:06:14.160  sys	0m0.014s
00:06:14.160   23:39:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:14.160   23:39:44	-- common/autotest_common.sh@10 -- # set +x
00:06:14.160  ************************************
00:06:14.160  END TEST unittest_dma
00:06:14.160  ************************************
00:06:14.160   23:39:44	-- unit/unittest.sh@263 -- # run_test unittest_init unittest_init
00:06:14.160   23:39:44	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:14.160   23:39:44	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:14.160   23:39:44	-- common/autotest_common.sh@10 -- # set +x
00:06:14.419  ************************************
00:06:14.419  START TEST unittest_init
00:06:14.419  ************************************
00:06:14.419   23:39:44	-- common/autotest_common.sh@1114 -- # unittest_init
00:06:14.419   23:39:44	-- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut
00:06:14.419  
00:06:14.419  
00:06:14.419       CUnit - A unit testing framework for C - Version 2.1-3
00:06:14.419       http://cunit.sourceforge.net/
00:06:14.419  
00:06:14.419  
00:06:14.419  Suite: subsystem_suite
00:06:14.419    Test: subsystem_sort_test_depends_on_single ...passed
00:06:14.419    Test: subsystem_sort_test_depends_on_multiple ...passed
00:06:14.419    Test: subsystem_sort_test_missing_dependency ...[2024-12-13 23:39:44.922239] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing
00:06:14.419  [2024-12-13 23:39:44.922922] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing
00:06:14.419  passed
00:06:14.419  
00:06:14.419  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:14.419                suites      1      1    n/a      0        0
00:06:14.419                 tests      3      3      3      0        0
00:06:14.419               asserts     20     20     20      0      n/a
00:06:14.419  
00:06:14.419  Elapsed time =    0.001 seconds
00:06:14.419  
00:06:14.419  real	0m0.039s
00:06:14.419  user	0m0.027s
00:06:14.419  sys	0m0.012s
00:06:14.419   23:39:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:14.419   23:39:44	-- common/autotest_common.sh@10 -- # set +x
00:06:14.419  ************************************
00:06:14.419  END TEST unittest_init
00:06:14.420  ************************************
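(subsystem_sort_test_missing_dependency above expects init to fail when a subsystem names a dependency that was never registered. Before any topological sort, that reduces to a membership check; a sketch with hypothetical types, not lib/init/subsystem.c's actual structures:)

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    struct subsystem {
        const char *name;
        const char **depends_on;  /* NULL-terminated list of names */
    };

    /* Return false if s depends on a subsystem absent from all[]. */
    static bool deps_satisfied(const struct subsystem *all, size_t n,
                               const struct subsystem *s)
    {
        for (const char **d = s->depends_on; d && *d; d++) {
            bool found = false;
            for (size_t i = 0; i < n; i++)
                if (strcmp(all[i].name, *d) == 0)
                    found = true;
            if (!found)
                return false;  /* "dependency ... is missing" */
        }
        return true;
    }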
00:06:14.420   23:39:44	-- unit/unittest.sh@265 -- # [[ y == y ]]
00:06:14.420    23:39:44	-- unit/unittest.sh@266 -- # hostname
00:06:14.420   23:39:44	-- unit/unittest.sh@266 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -d . -c --no-external -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info
00:06:14.678  geninfo: WARNING: invalid characters removed from testname!
00:06:36.643   23:40:07	-- unit/unittest.sh@267 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info
00:06:40.833   23:40:11	-- unit/unittest.sh@268 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:06:44.118   23:40:14	-- unit/unittest.sh@269 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:06:46.648   23:40:16	-- unit/unittest.sh@270 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:06:49.179   23:40:19	-- unit/unittest.sh@271 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:06:51.080   23:40:21	-- unit/unittest.sh@272 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:06:53.608   23:40:23	-- unit/unittest.sh@273 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info
00:06:53.608   23:40:23	-- unit/unittest.sh@274 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:06:54.176  Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:06:54.176  Found 309 entries.
00:06:54.176  Found common filename prefix "/home/vagrant/spdk_repo/spdk"
00:06:54.176  Writing .css and .png files.
00:06:54.176  Generating output.
00:06:54.176  Processing file include/linux/virtio_ring.h
00:06:54.435  Processing file include/spdk/mmio.h
00:06:54.435  Processing file include/spdk/util.h
00:06:54.435  Processing file include/spdk/base64.h
00:06:54.435  Processing file include/spdk/bdev_module.h
00:06:54.435  Processing file include/spdk/thread.h
00:06:54.435  Processing file include/spdk/trace.h
00:06:54.435  Processing file include/spdk/nvmf_transport.h
00:06:54.435  Processing file include/spdk/endian.h
00:06:54.435  Processing file include/spdk/nvme_spec.h
00:06:54.435  Processing file include/spdk/histogram_data.h
00:06:54.435  Processing file include/spdk/nvme.h
00:06:54.435  Processing file include/spdk_internal/virtio.h
00:06:54.435  Processing file include/spdk_internal/sgl.h
00:06:54.435  Processing file include/spdk_internal/sock.h
00:06:54.435  Processing file include/spdk_internal/nvme_tcp.h
00:06:54.435  Processing file include/spdk_internal/utf.h
00:06:54.435  Processing file include/spdk_internal/rdma.h
00:06:54.693  Processing file lib/accel/accel_sw.c
00:06:54.693  Processing file lib/accel/accel_rpc.c
00:06:54.693  Processing file lib/accel/accel.c
00:06:54.952  Processing file lib/bdev/scsi_nvme.c
00:06:54.952  Processing file lib/bdev/bdev_zone.c
00:06:54.952  Processing file lib/bdev/bdev_rpc.c
00:06:54.952  Processing file lib/bdev/part.c
00:06:54.952  Processing file lib/bdev/bdev.c
00:06:55.211  Processing file lib/blob/blobstore.h
00:06:55.211  Processing file lib/blob/zeroes.c
00:06:55.211  Processing file lib/blob/request.c
00:06:55.211  Processing file lib/blob/blob_bs_dev.c
00:06:55.211  Processing file lib/blob/blobstore.c
00:06:55.211  Processing file lib/blobfs/tree.c
00:06:55.211  Processing file lib/blobfs/blobfs.c
00:06:55.211  Processing file lib/conf/conf.c
00:06:55.211  Processing file lib/dma/dma.c
00:06:55.470  Processing file lib/env_dpdk/pci_ioat.c
00:06:55.470  Processing file lib/env_dpdk/pci_event.c
00:06:55.470  Processing file lib/env_dpdk/pci.c
00:06:55.470  Processing file lib/env_dpdk/pci_idxd.c
00:06:55.470  Processing file lib/env_dpdk/pci_vmd.c
00:06:55.470  Processing file lib/env_dpdk/pci_dpdk.c
00:06:55.470  Processing file lib/env_dpdk/sigbus_handler.c
00:06:55.470  Processing file lib/env_dpdk/pci_virtio.c
00:06:55.470  Processing file lib/env_dpdk/memory.c
00:06:55.470  Processing file lib/env_dpdk/pci_dpdk_2207.c
00:06:55.470  Processing file lib/env_dpdk/init.c
00:06:55.470  Processing file lib/env_dpdk/threads.c
00:06:55.470  Processing file lib/env_dpdk/env.c
00:06:55.470  Processing file lib/env_dpdk/pci_dpdk_2211.c
00:06:55.728  Processing file lib/event/app.c
00:06:55.728  Processing file lib/event/reactor.c
00:06:55.728  Processing file lib/event/app_rpc.c
00:06:55.728  Processing file lib/event/log_rpc.c
00:06:55.728  Processing file lib/event/scheduler_static.c
00:06:56.294  Processing file lib/ftl/ftl_rq.c
00:06:56.294  Processing file lib/ftl/ftl_band_ops.c
00:06:56.294  Processing file lib/ftl/ftl_core.h
00:06:56.294  Processing file lib/ftl/ftl_nv_cache.h
00:06:56.295  Processing file lib/ftl/ftl_p2l.c
00:06:56.295  Processing file lib/ftl/ftl_init.c
00:06:56.295  Processing file lib/ftl/ftl_sb.c
00:06:56.295  Processing file lib/ftl/ftl_l2p.c
00:06:56.295  Processing file lib/ftl/ftl_l2p_cache.c
00:06:56.295  Processing file lib/ftl/ftl_l2p_flat.c
00:06:56.295  Processing file lib/ftl/ftl_debug.c
00:06:56.295  Processing file lib/ftl/ftl_io.c
00:06:56.295  Processing file lib/ftl/ftl_layout.c
00:06:56.295  Processing file lib/ftl/ftl_reloc.c
00:06:56.295  Processing file lib/ftl/ftl_nv_cache_io.h
00:06:56.295  Processing file lib/ftl/ftl_trace.c
00:06:56.295  Processing file lib/ftl/ftl_io.h
00:06:56.295  Processing file lib/ftl/ftl_band.h
00:06:56.295  Processing file lib/ftl/ftl_core.c
00:06:56.295  Processing file lib/ftl/ftl_writer.h
00:06:56.295  Processing file lib/ftl/ftl_nv_cache.c
00:06:56.295  Processing file lib/ftl/ftl_debug.h
00:06:56.295  Processing file lib/ftl/ftl_writer.c
00:06:56.295  Processing file lib/ftl/ftl_band.c
00:06:56.295  Processing file lib/ftl/base/ftl_base_bdev.c
00:06:56.295  Processing file lib/ftl/base/ftl_base_dev.c
00:06:56.553  Processing file lib/ftl/mngt/ftl_mngt_band.c
00:06:56.553  Processing file lib/ftl/mngt/ftl_mngt_startup.c
00:06:56.553  Processing file lib/ftl/mngt/ftl_mngt_upgrade.c
00:06:56.553  Processing file lib/ftl/mngt/ftl_mngt_recovery.c
00:06:56.553  Processing file lib/ftl/mngt/ftl_mngt_p2l.c
00:06:56.553  Processing file lib/ftl/mngt/ftl_mngt.c
00:06:56.553  Processing file lib/ftl/mngt/ftl_mngt_l2p.c
00:06:56.553  Processing file lib/ftl/mngt/ftl_mngt_shutdown.c
00:06:56.553  Processing file lib/ftl/mngt/ftl_mngt_md.c
00:06:56.553  Processing file lib/ftl/mngt/ftl_mngt_bdev.c
00:06:56.553  Processing file lib/ftl/mngt/ftl_mngt_misc.c
00:06:56.553  Processing file lib/ftl/mngt/ftl_mngt_self_test.c
00:06:56.553  Processing file lib/ftl/mngt/ftl_mngt_ioch.c
00:06:56.553  Processing file lib/ftl/nvc/ftl_nvc_dev.c
00:06:56.553  Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c
00:06:56.553  Processing file lib/ftl/upgrade/ftl_layout_upgrade.c
00:06:56.553  Processing file lib/ftl/upgrade/ftl_sb_v3.c
00:06:56.553  Processing file lib/ftl/upgrade/ftl_sb_upgrade.c
00:06:56.553  Processing file lib/ftl/upgrade/ftl_sb_v5.c
00:06:56.812  Processing file lib/ftl/utils/ftl_mempool.c
00:06:56.812  Processing file lib/ftl/utils/ftl_conf.c
00:06:56.812  Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c
00:06:56.812  Processing file lib/ftl/utils/ftl_addr_utils.h
00:06:56.812  Processing file lib/ftl/utils/ftl_property.h
00:06:56.812  Processing file lib/ftl/utils/ftl_bitmap.c
00:06:56.812  Processing file lib/ftl/utils/ftl_md.c
00:06:56.812  Processing file lib/ftl/utils/ftl_property.c
00:06:56.812  Processing file lib/ftl/utils/ftl_df.h
00:06:56.812  Processing file lib/idxd/idxd_user.c
00:06:56.812  Processing file lib/idxd/idxd.c
00:06:56.812  Processing file lib/idxd/idxd_internal.h
00:06:57.071  Processing file lib/init/rpc.c
00:06:57.071  Processing file lib/init/json_config.c
00:06:57.071  Processing file lib/init/subsystem_rpc.c
00:06:57.071  Processing file lib/init/subsystem.c
00:06:57.071  Processing file lib/ioat/ioat.c
00:06:57.071  Processing file lib/ioat/ioat_internal.h
00:06:57.636  Processing file lib/iscsi/iscsi_rpc.c
00:06:57.636  Processing file lib/iscsi/iscsi.h
00:06:57.636  Processing file lib/iscsi/task.c
00:06:57.636  Processing file lib/iscsi/tgt_node.c
00:06:57.636  Processing file lib/iscsi/iscsi_subsystem.c
00:06:57.636  Processing file lib/iscsi/init_grp.c
00:06:57.636  Processing file lib/iscsi/task.h
00:06:57.636  Processing file lib/iscsi/md5.c
00:06:57.636  Processing file lib/iscsi/portal_grp.c
00:06:57.636  Processing file lib/iscsi/param.c
00:06:57.636  Processing file lib/iscsi/conn.c
00:06:57.636  Processing file lib/iscsi/iscsi.c
00:06:57.636  Processing file lib/json/json_write.c
00:06:57.636  Processing file lib/json/json_util.c
00:06:57.636  Processing file lib/json/json_parse.c
00:06:57.636  Processing file lib/jsonrpc/jsonrpc_server.c
00:06:57.636  Processing file lib/jsonrpc/jsonrpc_client_tcp.c
00:06:57.636  Processing file lib/jsonrpc/jsonrpc_client.c
00:06:57.636  Processing file lib/jsonrpc/jsonrpc_server_tcp.c
00:06:57.893  Processing file lib/log/log_flags.c
00:06:57.893  Processing file lib/log/log.c
00:06:57.893  Processing file lib/log/log_deprecated.c
00:06:57.893  Processing file lib/lvol/lvol.c
00:06:57.893  Processing file lib/nbd/nbd.c
00:06:57.893  Processing file lib/nbd/nbd_rpc.c
00:06:58.151  Processing file lib/notify/notify.c
00:06:58.151  Processing file lib/notify/notify_rpc.c
00:06:58.719  Processing file lib/nvme/nvme_cuse.c
00:06:58.719  Processing file lib/nvme/nvme_opal.c
00:06:58.719  Processing file lib/nvme/nvme_rdma.c
00:06:58.719  Processing file lib/nvme/nvme_poll_group.c
00:06:58.719  Processing file lib/nvme/nvme_fabric.c
00:06:58.719  Processing file lib/nvme/nvme.c
00:06:58.719  Processing file lib/nvme/nvme_vfio_user.c
00:06:58.719  Processing file lib/nvme/nvme_io_msg.c
00:06:58.719  Processing file lib/nvme/nvme_ctrlr_cmd.c
00:06:58.719  Processing file lib/nvme/nvme_ns.c
00:06:58.719  Processing file lib/nvme/nvme_tcp.c
00:06:58.719  Processing file lib/nvme/nvme_transport.c
00:06:58.719  Processing file lib/nvme/nvme_quirks.c
00:06:58.719  Processing file lib/nvme/nvme_discovery.c
00:06:58.719  Processing file lib/nvme/nvme_pcie_common.c
00:06:58.719  Processing file lib/nvme/nvme_ns_cmd.c
00:06:58.719  Processing file lib/nvme/nvme_zns.c
00:06:58.719  Processing file lib/nvme/nvme_internal.h
00:06:58.719  Processing file lib/nvme/nvme_qpair.c
00:06:58.719  Processing file lib/nvme/nvme_ctrlr.c
00:06:58.719  Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c
00:06:58.719  Processing file lib/nvme/nvme_pcie_internal.h
00:06:58.719  Processing file lib/nvme/nvme_pcie.c
00:06:58.719  Processing file lib/nvme/nvme_ns_ocssd_cmd.c
00:06:59.287  Processing file lib/nvmf/ctrlr_bdev.c
00:06:59.287  Processing file lib/nvmf/ctrlr.c
00:06:59.287  Processing file lib/nvmf/nvmf.c
00:06:59.287  Processing file lib/nvmf/tcp.c
00:06:59.287  Processing file lib/nvmf/subsystem.c
00:06:59.287  Processing file lib/nvmf/nvmf_rpc.c
00:06:59.287  Processing file lib/nvmf/nvmf_internal.h
00:06:59.287  Processing file lib/nvmf/rdma.c
00:06:59.287  Processing file lib/nvmf/transport.c
00:06:59.287  Processing file lib/nvmf/ctrlr_discovery.c
00:06:59.287  Processing file lib/rdma/common.c
00:06:59.287  Processing file lib/rdma/rdma_verbs.c
00:06:59.287  Processing file lib/rpc/rpc.c
00:06:59.546  Processing file lib/scsi/scsi_rpc.c
00:06:59.546  Processing file lib/scsi/scsi.c
00:06:59.546  Processing file lib/scsi/dev.c
00:06:59.546  Processing file lib/scsi/task.c
00:06:59.546  Processing file lib/scsi/scsi_bdev.c
00:06:59.546  Processing file lib/scsi/scsi_pr.c
00:06:59.546  Processing file lib/scsi/lun.c
00:06:59.546  Processing file lib/scsi/port.c
00:06:59.546  Processing file lib/sock/sock.c
00:06:59.546  Processing file lib/sock/sock_rpc.c
00:06:59.804  Processing file lib/thread/iobuf.c
00:06:59.805  Processing file lib/thread/thread.c
00:06:59.805  Processing file lib/trace/trace.c
00:06:59.805  Processing file lib/trace/trace_flags.c
00:06:59.805  Processing file lib/trace/trace_rpc.c
00:06:59.805  Processing file lib/trace_parser/trace.cpp
00:07:00.064  Processing file lib/ut/ut.c
00:07:00.064  Processing file lib/ut_mock/mock.c
00:07:00.323  Processing file lib/util/dif.c
00:07:00.323  Processing file lib/util/bit_array.c
00:07:00.323  Processing file lib/util/strerror_tls.c
00:07:00.323  Processing file lib/util/fd_group.c
00:07:00.323  Processing file lib/util/crc32c.c
00:07:00.323  Processing file lib/util/crc16.c
00:07:00.323  Processing file lib/util/iov.c
00:07:00.323  Processing file lib/util/fd.c
00:07:00.323  Processing file lib/util/crc32_ieee.c
00:07:00.323  Processing file lib/util/pipe.c
00:07:00.323  Processing file lib/util/crc64.c
00:07:00.323  Processing file lib/util/uuid.c
00:07:00.323  Processing file lib/util/cpuset.c
00:07:00.323  Processing file lib/util/crc32.c
00:07:00.323  Processing file lib/util/xor.c
00:07:00.323  Processing file lib/util/zipf.c
00:07:00.323  Processing file lib/util/hexlify.c
00:07:00.323  Processing file lib/util/string.c
00:07:00.323  Processing file lib/util/file.c
00:07:00.323  Processing file lib/util/base64.c
00:07:00.323  Processing file lib/util/math.c
00:07:00.323  Processing file lib/vfio_user/host/vfio_user.c
00:07:00.323  Processing file lib/vfio_user/host/vfio_user_pci.c
00:07:00.582  Processing file lib/vhost/vhost.c
00:07:00.582  Processing file lib/vhost/vhost_rpc.c
00:07:00.582  Processing file lib/vhost/vhost_internal.h
00:07:00.582  Processing file lib/vhost/rte_vhost_user.c
00:07:00.582  Processing file lib/vhost/vhost_blk.c
00:07:00.582  Processing file lib/vhost/vhost_scsi.c
00:07:00.840  Processing file lib/virtio/virtio.c
00:07:00.840  Processing file lib/virtio/virtio_pci.c
00:07:00.841  Processing file lib/virtio/virtio_vfio_user.c
00:07:00.841  Processing file lib/virtio/virtio_vhost_user.c
00:07:00.841  Processing file lib/vmd/vmd.c
00:07:00.841  Processing file lib/vmd/led.c
00:07:00.841  Processing file module/accel/dsa/accel_dsa_rpc.c
00:07:00.841  Processing file module/accel/dsa/accel_dsa.c
00:07:01.099  Processing file module/accel/error/accel_error.c
00:07:01.099  Processing file module/accel/error/accel_error_rpc.c
00:07:01.099  Processing file module/accel/iaa/accel_iaa.c
00:07:01.099  Processing file module/accel/iaa/accel_iaa_rpc.c
00:07:01.099  Processing file module/accel/ioat/accel_ioat_rpc.c
00:07:01.099  Processing file module/accel/ioat/accel_ioat.c
00:07:01.099  Processing file module/bdev/aio/bdev_aio_rpc.c
00:07:01.099  Processing file module/bdev/aio/bdev_aio.c
00:07:01.358  Processing file module/bdev/delay/vbdev_delay_rpc.c
00:07:01.358  Processing file module/bdev/delay/vbdev_delay.c
00:07:01.358  Processing file module/bdev/error/vbdev_error_rpc.c
00:07:01.358  Processing file module/bdev/error/vbdev_error.c
00:07:01.358  Processing file module/bdev/ftl/bdev_ftl_rpc.c
00:07:01.358  Processing file module/bdev/ftl/bdev_ftl.c
00:07:01.617  Processing file module/bdev/gpt/vbdev_gpt.c
00:07:01.617  Processing file module/bdev/gpt/gpt.c
00:07:01.617  Processing file module/bdev/gpt/gpt.h
00:07:01.617  Processing file module/bdev/iscsi/bdev_iscsi.c
00:07:01.617  Processing file module/bdev/iscsi/bdev_iscsi_rpc.c
00:07:01.617  Processing file module/bdev/lvol/vbdev_lvol.c
00:07:01.617  Processing file module/bdev/lvol/vbdev_lvol_rpc.c
00:07:01.876  Processing file module/bdev/malloc/bdev_malloc.c
00:07:01.876  Processing file module/bdev/malloc/bdev_malloc_rpc.c
00:07:01.876  Processing file module/bdev/null/bdev_null_rpc.c
00:07:01.876  Processing file module/bdev/null/bdev_null.c
00:07:02.135  Processing file module/bdev/nvme/vbdev_opal_rpc.c
00:07:02.135  Processing file module/bdev/nvme/bdev_nvme.c
00:07:02.135  Processing file module/bdev/nvme/bdev_nvme_rpc.c
00:07:02.135  Processing file module/bdev/nvme/vbdev_opal.c
00:07:02.135  Processing file module/bdev/nvme/bdev_mdns_client.c
00:07:02.135  Processing file module/bdev/nvme/nvme_rpc.c
00:07:02.135  Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c
00:07:02.393  Processing file module/bdev/passthru/vbdev_passthru_rpc.c
00:07:02.393  Processing file module/bdev/passthru/vbdev_passthru.c
00:07:02.652  Processing file module/bdev/raid/bdev_raid_sb.c
00:07:02.652  Processing file module/bdev/raid/raid1.c
00:07:02.652  Processing file module/bdev/raid/bdev_raid.h
00:07:02.652  Processing file module/bdev/raid/raid0.c
00:07:02.652  Processing file module/bdev/raid/bdev_raid.c
00:07:02.652  Processing file module/bdev/raid/raid5f.c
00:07:02.652  Processing file module/bdev/raid/bdev_raid_rpc.c
00:07:02.652  Processing file module/bdev/raid/concat.c
00:07:02.652  Processing file module/bdev/split/vbdev_split_rpc.c
00:07:02.652  Processing file module/bdev/split/vbdev_split.c
00:07:02.910  Processing file module/bdev/virtio/bdev_virtio_rpc.c
00:07:02.910  Processing file module/bdev/virtio/bdev_virtio_scsi.c
00:07:02.910  Processing file module/bdev/virtio/bdev_virtio_blk.c
00:07:02.910  Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c
00:07:02.910  Processing file module/bdev/zone_block/vbdev_zone_block.c
00:07:02.910  Processing file module/blob/bdev/blob_bdev.c
00:07:02.910  Processing file module/blobfs/bdev/blobfs_bdev_rpc.c
00:07:02.910  Processing file module/blobfs/bdev/blobfs_bdev.c
00:07:03.169  Processing file module/env_dpdk/env_dpdk_rpc.c
00:07:03.169  Processing file module/event/subsystems/accel/accel.c
00:07:03.169  Processing file module/event/subsystems/bdev/bdev.c
00:07:03.169  Processing file module/event/subsystems/iobuf/iobuf.c
00:07:03.169  Processing file module/event/subsystems/iobuf/iobuf_rpc.c
00:07:03.428  Processing file module/event/subsystems/iscsi/iscsi.c
00:07:03.428  Processing file module/event/subsystems/nbd/nbd.c
00:07:03.428  Processing file module/event/subsystems/nvmf/nvmf_tgt.c
00:07:03.428  Processing file module/event/subsystems/nvmf/nvmf_rpc.c
00:07:03.428  Processing file module/event/subsystems/scheduler/scheduler.c
00:07:03.686  Processing file module/event/subsystems/scsi/scsi.c
00:07:03.686  Processing file module/event/subsystems/sock/sock.c
00:07:03.686  Processing file module/event/subsystems/vhost_blk/vhost_blk.c
00:07:03.686  Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c
00:07:03.945  Processing file module/event/subsystems/vmd/vmd.c
00:07:03.945  Processing file module/event/subsystems/vmd/vmd_rpc.c
00:07:03.945  Processing file module/scheduler/dpdk_governor/dpdk_governor.c
00:07:03.945  Processing file module/scheduler/dynamic/scheduler_dynamic.c
00:07:04.203  Processing file module/scheduler/gscheduler/gscheduler.c
00:07:04.203  Processing file module/sock/sock_kernel.h
00:07:04.203  Processing file module/sock/posix/posix.c
00:07:04.203  Writing directory view page.
00:07:04.203  Overall coverage rate:
00:07:04.203    lines......: 39.1% (39266 of 100435 lines)
00:07:04.203    functions..: 42.8% (3587 of 8384 functions)
00:07:04.203  
00:07:04.203  
00:07:04.203  =====================
00:07:04.203  All unit tests passed
00:07:04.203  =====================
00:07:04.203   23:40:34	-- unit/unittest.sh@277 -- # set +x
00:07:04.203  WARN: lcov not installed or SPDK built without coverage!
00:07:04.203  
00:07:04.203  
00:07:04.203  
00:07:04.203  real	2m45.640s
00:07:04.203  user	2m21.462s
00:07:04.203  sys	0m14.612s
00:07:04.203   23:40:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:04.203   23:40:34	-- common/autotest_common.sh@10 -- # set +x
00:07:04.203  ************************************
00:07:04.203  END TEST unittest
00:07:04.203  ************************************
00:07:04.203   23:40:34	-- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']'
00:07:04.203   23:40:34	-- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]]
00:07:04.203   23:40:34	-- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]]
00:07:04.203   23:40:34	-- spdk/autotest.sh@160 -- # timing_enter lib
00:07:04.203   23:40:34	-- common/autotest_common.sh@722 -- # xtrace_disable
00:07:04.203   23:40:34	-- common/autotest_common.sh@10 -- # set +x
00:07:04.203   23:40:34	-- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:07:04.203   23:40:34	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:04.203   23:40:34	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:04.203   23:40:34	-- common/autotest_common.sh@10 -- # set +x
00:07:04.462  ************************************
00:07:04.462  START TEST env
00:07:04.462  ************************************
00:07:04.462   23:40:34	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:07:04.462  * Looking for test storage...
00:07:04.462  * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:07:04.462    23:40:35	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:04.462     23:40:35	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:04.462     23:40:35	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:04.462    23:40:35	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:04.462    23:40:35	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:04.462    23:40:35	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:04.462    23:40:35	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:04.462    23:40:35	-- scripts/common.sh@335 -- # IFS=.-:
00:07:04.462    23:40:35	-- scripts/common.sh@335 -- # read -ra ver1
00:07:04.462    23:40:35	-- scripts/common.sh@336 -- # IFS=.-:
00:07:04.462    23:40:35	-- scripts/common.sh@336 -- # read -ra ver2
00:07:04.462    23:40:35	-- scripts/common.sh@337 -- # local 'op=<'
00:07:04.462    23:40:35	-- scripts/common.sh@339 -- # ver1_l=2
00:07:04.462    23:40:35	-- scripts/common.sh@340 -- # ver2_l=1
00:07:04.462    23:40:35	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:04.462    23:40:35	-- scripts/common.sh@343 -- # case "$op" in
00:07:04.462    23:40:35	-- scripts/common.sh@344 -- # : 1
00:07:04.462    23:40:35	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:04.462    23:40:35	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:04.462     23:40:35	-- scripts/common.sh@364 -- # decimal 1
00:07:04.462     23:40:35	-- scripts/common.sh@352 -- # local d=1
00:07:04.462     23:40:35	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:04.462     23:40:35	-- scripts/common.sh@354 -- # echo 1
00:07:04.463    23:40:35	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:04.463     23:40:35	-- scripts/common.sh@365 -- # decimal 2
00:07:04.463     23:40:35	-- scripts/common.sh@352 -- # local d=2
00:07:04.463     23:40:35	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:04.463     23:40:35	-- scripts/common.sh@354 -- # echo 2
00:07:04.463    23:40:35	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:04.463    23:40:35	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:04.463    23:40:35	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:04.463    23:40:35	-- scripts/common.sh@367 -- # return 0
00:07:04.463    23:40:35	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:04.463    23:40:35	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:04.463  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.463  		--rc genhtml_branch_coverage=1
00:07:04.463  		--rc genhtml_function_coverage=1
00:07:04.463  		--rc genhtml_legend=1
00:07:04.463  		--rc geninfo_all_blocks=1
00:07:04.463  		--rc geninfo_unexecuted_blocks=1
00:07:04.463  		
00:07:04.463  		'
00:07:04.463    23:40:35	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:04.463  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.463  		--rc genhtml_branch_coverage=1
00:07:04.463  		--rc genhtml_function_coverage=1
00:07:04.463  		--rc genhtml_legend=1
00:07:04.463  		--rc geninfo_all_blocks=1
00:07:04.463  		--rc geninfo_unexecuted_blocks=1
00:07:04.463  		
00:07:04.463  		'
00:07:04.463    23:40:35	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:04.463  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.463  		--rc genhtml_branch_coverage=1
00:07:04.463  		--rc genhtml_function_coverage=1
00:07:04.463  		--rc genhtml_legend=1
00:07:04.463  		--rc geninfo_all_blocks=1
00:07:04.463  		--rc geninfo_unexecuted_blocks=1
00:07:04.463  		
00:07:04.463  		'
00:07:04.463    23:40:35	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:04.463  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.463  		--rc genhtml_branch_coverage=1
00:07:04.463  		--rc genhtml_function_coverage=1
00:07:04.463  		--rc genhtml_legend=1
00:07:04.463  		--rc geninfo_all_blocks=1
00:07:04.463  		--rc geninfo_unexecuted_blocks=1
00:07:04.463  		
00:07:04.463  		'
00:07:04.463   23:40:35	-- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:07:04.463   23:40:35	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:04.463   23:40:35	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:04.463   23:40:35	-- common/autotest_common.sh@10 -- # set +x
00:07:04.463  ************************************
00:07:04.463  START TEST env_memory
00:07:04.463  ************************************
00:07:04.463   23:40:35	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:07:04.463  
00:07:04.463  
00:07:04.463       CUnit - A unit testing framework for C - Version 2.1-3
00:07:04.463       http://cunit.sourceforge.net/
00:07:04.463  
00:07:04.463  
00:07:04.463  Suite: memory
00:07:04.463    Test: alloc and free memory map ...[2024-12-13 23:40:35.187174] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:07:04.722  passed
00:07:04.722    Test: mem map translation ...[2024-12-13 23:40:35.236199] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:07:04.722  [2024-12-13 23:40:35.236335] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:07:04.722  [2024-12-13 23:40:35.236462] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:07:04.722  [2024-12-13 23:40:35.236550] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:07:04.722  passed
00:07:04.722    Test: mem map registration ...[2024-12-13 23:40:35.322590] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:07:04.722  [2024-12-13 23:40:35.322724] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:07:04.722  passed
00:07:04.722    Test: mem map adjacent registrations ...passed
00:07:04.722  
00:07:04.722  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:04.722                suites      1      1    n/a      0        0
00:07:04.722                 tests      4      4      4      0        0
00:07:04.722               asserts    152    152    152      0      n/a
00:07:04.722  
00:07:04.722  Elapsed time =    0.296 seconds
00:07:04.981  
00:07:04.981  real	0m0.330s
00:07:04.981  user	0m0.318s
00:07:04.981  sys	0m0.012s
00:07:04.981   23:40:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:04.981   23:40:35	-- common/autotest_common.sh@10 -- # set +x
00:07:04.981  ************************************
00:07:04.981  END TEST env_memory
00:07:04.981  ************************************
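(The memory suite above works at 2 MB granularity: translations are set, looked up, and rejected per 2 MB region. A toy single-level lookup table captures the addressing arithmetic; SPDK's spdk_mem_map is multi-level, so this is shape, not implementation:)

    #include <stddef.h>
    #include <stdint.h>

    #define SHIFT_2MB 21
    #define MASK_2MB  ((1ULL << SHIFT_2MB) - 1)

    /* Translate vaddr via a flat table of per-2MB entries;
     * UINT64_MAX means "no translation registered". */
    static uint64_t toy_map_translate(const uint64_t *table,
                                      size_t entries, uint64_t vaddr)
    {
        uint64_t idx = vaddr >> SHIFT_2MB;
        if (idx >= entries || table[idx] == UINT64_MAX)
            return UINT64_MAX;
        return table[idx] | (vaddr & MASK_2MB);  /* keep page offset */
    }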
00:07:04.981   23:40:35	-- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:07:04.981   23:40:35	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:04.981   23:40:35	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:04.981   23:40:35	-- common/autotest_common.sh@10 -- # set +x
00:07:04.981  ************************************
00:07:04.981  START TEST env_vtophys
00:07:04.981  ************************************
00:07:04.981   23:40:35	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:07:04.981  EAL: lib.eal log level changed from notice to debug
00:07:04.981  EAL: Detected lcore 0 as core 0 on socket 0
00:07:04.981  EAL: Detected lcore 1 as core 0 on socket 0
00:07:04.981  EAL: Detected lcore 2 as core 0 on socket 0
00:07:04.981  EAL: Detected lcore 3 as core 0 on socket 0
00:07:04.981  EAL: Detected lcore 4 as core 0 on socket 0
00:07:04.981  EAL: Detected lcore 5 as core 0 on socket 0
00:07:04.981  EAL: Detected lcore 6 as core 0 on socket 0
00:07:04.981  EAL: Detected lcore 7 as core 0 on socket 0
00:07:04.981  EAL: Detected lcore 8 as core 0 on socket 0
00:07:04.981  EAL: Detected lcore 9 as core 0 on socket 0
00:07:04.981  EAL: Maximum logical cores by configuration: 128
00:07:04.981  EAL: Detected CPU lcores: 10
00:07:04.981  EAL: Detected NUMA nodes: 1
00:07:04.981  EAL: Checking presence of .so 'librte_eal.so.24.0'
00:07:04.981  EAL: Checking presence of .so 'librte_eal.so.24'
00:07:04.981  EAL: Checking presence of .so 'librte_eal.so'
00:07:04.981  EAL: Detected static linkage of DPDK
00:07:04.981  EAL: No shared files mode enabled, IPC will be disabled
00:07:04.981  EAL: Selected IOVA mode 'PA'
00:07:04.981  EAL: Probing VFIO support...
00:07:04.981  EAL: IOMMU type 1 (Type 1) is supported
00:07:04.981  EAL: IOMMU type 7 (sPAPR) is not supported
00:07:04.981  EAL: IOMMU type 8 (No-IOMMU) is not supported
00:07:04.981  EAL: VFIO support initialized
00:07:04.981  EAL: Ask a virtual area of 0x2e000 bytes
00:07:04.981  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:07:04.981  EAL: Setting up physically contiguous memory...
00:07:04.981  EAL: Setting maximum number of open files to 1048576
00:07:04.981  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:07:04.981  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:07:04.981  EAL: Ask a virtual area of 0x61000 bytes
00:07:04.981  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:07:04.981  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:07:04.981  EAL: Ask a virtual area of 0x400000000 bytes
00:07:04.981  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:07:04.981  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:07:04.981  EAL: Ask a virtual area of 0x61000 bytes
00:07:04.981  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:07:04.981  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:07:04.981  EAL: Ask a virtual area of 0x400000000 bytes
00:07:04.981  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:07:04.981  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:07:04.981  EAL: Ask a virtual area of 0x61000 bytes
00:07:04.981  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:07:04.981  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:07:04.981  EAL: Ask a virtual area of 0x400000000 bytes
00:07:04.981  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:07:04.981  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:07:04.981  EAL: Ask a virtual area of 0x61000 bytes
00:07:04.981  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:07:04.981  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:07:04.981  EAL: Ask a virtual area of 0x400000000 bytes
00:07:04.981  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:07:04.981  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:07:04.981  EAL: Hugepages will be freed exactly as allocated.
00:07:04.981  EAL: No shared files mode enabled, IPC is disabled
00:07:04.981  EAL: No shared files mode enabled, IPC is disabled
00:07:04.981  EAL: TSC frequency is ~2200000 KHz
00:07:04.981  EAL: Main lcore 0 is ready (tid=7f23d8180a80;cpuset=[0])
00:07:04.981  EAL: Trying to obtain current memory policy.
00:07:04.981  EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:04.981  EAL: Restoring previous memory policy: 0
00:07:04.981  EAL: request: mp_malloc_sync
00:07:04.981  EAL: No shared files mode enabled, IPC is disabled
00:07:04.981  EAL: Heap on socket 0 was expanded by 2MB
00:07:04.981  EAL: No shared files mode enabled, IPC is disabled
00:07:05.241  EAL: Mem event callback 'spdk:(nil)' registered
00:07:05.241  
00:07:05.241  
00:07:05.241       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.241       http://cunit.sourceforge.net/
00:07:05.241  
00:07:05.241  
00:07:05.241  Suite: components_suite
00:07:05.498    Test: vtophys_malloc_test ...passed
00:07:05.498    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:07:05.498  EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:05.498  EAL: Restoring previous memory policy: 0
00:07:05.498  EAL: Calling mem event callback 'spdk:(nil)'
00:07:05.498  EAL: request: mp_malloc_sync
00:07:05.498  EAL: No shared files mode enabled, IPC is disabled
00:07:05.498  EAL: Heap on socket 0 was expanded by 4MB
00:07:05.498  EAL: Calling mem event callback 'spdk:(nil)'
00:07:05.498  EAL: request: mp_malloc_sync
00:07:05.498  EAL: No shared files mode enabled, IPC is disabled
00:07:05.498  EAL: Heap on socket 0 was shrunk by 4MB
00:07:05.498  EAL: Trying to obtain current memory policy.
00:07:05.498  EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:05.498  EAL: Restoring previous memory policy: 0
00:07:05.498  EAL: Calling mem event callback 'spdk:(nil)'
00:07:05.498  EAL: request: mp_malloc_sync
00:07:05.498  EAL: No shared files mode enabled, IPC is disabled
00:07:05.498  EAL: Heap on socket 0 was expanded by 6MB
00:07:05.498  EAL: Calling mem event callback 'spdk:(nil)'
00:07:05.498  EAL: request: mp_malloc_sync
00:07:05.498  EAL: No shared files mode enabled, IPC is disabled
00:07:05.498  EAL: Heap on socket 0 was shrunk by 6MB
00:07:05.498  EAL: Trying to obtain current memory policy.
00:07:05.498  EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:05.498  EAL: Restoring previous memory policy: 0
00:07:05.498  EAL: Calling mem event callback 'spdk:(nil)'
00:07:05.498  EAL: request: mp_malloc_sync
00:07:05.498  EAL: No shared files mode enabled, IPC is disabled
00:07:05.498  EAL: Heap on socket 0 was expanded by 10MB
00:07:05.498  EAL: Calling mem event callback 'spdk:(nil)'
00:07:05.498  EAL: request: mp_malloc_sync
00:07:05.498  EAL: No shared files mode enabled, IPC is disabled
00:07:05.498  EAL: Heap on socket 0 was shrunk by 10MB
00:07:05.754  EAL: Trying to obtain current memory policy.
00:07:05.754  EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:05.754  EAL: Restoring previous memory policy: 0
00:07:05.754  EAL: Calling mem event callback 'spdk:(nil)'
00:07:05.755  EAL: request: mp_malloc_sync
00:07:05.755  EAL: No shared files mode enabled, IPC is disabled
00:07:05.755  EAL: Heap on socket 0 was expanded by 18MB
00:07:05.755  EAL: Calling mem event callback 'spdk:(nil)'
00:07:05.755  EAL: request: mp_malloc_sync
00:07:05.755  EAL: No shared files mode enabled, IPC is disabled
00:07:05.755  EAL: Heap on socket 0 was shrunk by 18MB
00:07:05.755  EAL: Trying to obtain current memory policy.
00:07:05.755  EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:05.755  EAL: Restoring previous memory policy: 0
00:07:05.755  EAL: Calling mem event callback 'spdk:(nil)'
00:07:05.755  EAL: request: mp_malloc_sync
00:07:05.755  EAL: No shared files mode enabled, IPC is disabled
00:07:05.755  EAL: Heap on socket 0 was expanded by 34MB
00:07:05.755  EAL: Calling mem event callback 'spdk:(nil)'
00:07:05.755  EAL: request: mp_malloc_sync
00:07:05.755  EAL: No shared files mode enabled, IPC is disabled
00:07:05.755  EAL: Heap on socket 0 was shrunk by 34MB
00:07:05.755  EAL: Trying to obtain current memory policy.
00:07:05.755  EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:05.755  EAL: Restoring previous memory policy: 0
00:07:05.755  EAL: Calling mem event callback 'spdk:(nil)'
00:07:05.755  EAL: request: mp_malloc_sync
00:07:05.755  EAL: No shared files mode enabled, IPC is disabled
00:07:05.755  EAL: Heap on socket 0 was expanded by 66MB
00:07:06.012  EAL: Calling mem event callback 'spdk:(nil)'
00:07:06.012  EAL: request: mp_malloc_sync
00:07:06.012  EAL: No shared files mode enabled, IPC is disabled
00:07:06.012  EAL: Heap on socket 0 was shrunk by 66MB
00:07:06.012  EAL: Trying to obtain current memory policy.
00:07:06.012  EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:06.012  EAL: Restoring previous memory policy: 0
00:07:06.012  EAL: Calling mem event callback 'spdk:(nil)'
00:07:06.012  EAL: request: mp_malloc_sync
00:07:06.012  EAL: No shared files mode enabled, IPC is disabled
00:07:06.012  EAL: Heap on socket 0 was expanded by 130MB
00:07:06.269  EAL: Calling mem event callback 'spdk:(nil)'
00:07:06.269  EAL: request: mp_malloc_sync
00:07:06.269  EAL: No shared files mode enabled, IPC is disabled
00:07:06.269  EAL: Heap on socket 0 was shrunk by 130MB
00:07:06.527  EAL: Trying to obtain current memory policy.
00:07:06.527  EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:06.527  EAL: Restoring previous memory policy: 0
00:07:06.527  EAL: Calling mem event callback 'spdk:(nil)'
00:07:06.527  EAL: request: mp_malloc_sync
00:07:06.527  EAL: No shared files mode enabled, IPC is disabled
00:07:06.527  EAL: Heap on socket 0 was expanded by 258MB
00:07:07.094  EAL: Calling mem event callback 'spdk:(nil)'
00:07:07.094  EAL: request: mp_malloc_sync
00:07:07.094  EAL: No shared files mode enabled, IPC is disabled
00:07:07.094  EAL: Heap on socket 0 was shrunk by 258MB
00:07:07.350  EAL: Trying to obtain current memory policy.
00:07:07.350  EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:07.608  EAL: Restoring previous memory policy: 0
00:07:07.608  EAL: Calling mem event callback 'spdk:(nil)'
00:07:07.608  EAL: request: mp_malloc_sync
00:07:07.608  EAL: No shared files mode enabled, IPC is disabled
00:07:07.608  EAL: Heap on socket 0 was expanded by 514MB
00:07:08.174  EAL: Calling mem event callback 'spdk:(nil)'
00:07:08.434  EAL: request: mp_malloc_sync
00:07:08.434  EAL: No shared files mode enabled, IPC is disabled
00:07:08.434  EAL: Heap on socket 0 was shrunk by 514MB
00:07:09.000  EAL: Trying to obtain current memory policy.
00:07:09.000  EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:09.258  EAL: Restoring previous memory policy: 0
00:07:09.258  EAL: Calling mem event callback 'spdk:(nil)'
00:07:09.258  EAL: request: mp_malloc_sync
00:07:09.258  EAL: No shared files mode enabled, IPC is disabled
00:07:09.258  EAL: Heap on socket 0 was expanded by 1026MB
00:07:10.632  EAL: Calling mem event callback 'spdk:(nil)'
00:07:11.199  EAL: request: mp_malloc_sync
00:07:11.199  EAL: No shared files mode enabled, IPC is disabled
00:07:11.199  EAL: Heap on socket 0 was shrunk by 1026MB
00:07:12.134  passed
00:07:12.134  
00:07:12.134  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:12.134                suites      1      1    n/a      0        0
00:07:12.134                 tests      2      2      2      0        0
00:07:12.134               asserts   6363   6363   6363      0      n/a
00:07:12.134  
00:07:12.134  Elapsed time =    6.964 seconds
00:07:12.134  EAL: Calling mem event callback 'spdk:(nil)'
00:07:12.134  EAL: request: mp_malloc_sync
00:07:12.134  EAL: No shared files mode enabled, IPC is disabled
00:07:12.134  EAL: Heap on socket 0 was shrunk by 2MB
00:07:12.134  EAL: No shared files mode enabled, IPC is disabled
00:07:12.134  EAL: No shared files mode enabled, IPC is disabled
00:07:12.134  EAL: No shared files mode enabled, IPC is disabled
00:07:12.134  
00:07:12.134  real	0m7.262s
00:07:12.134  user	0m5.983s
00:07:12.134  sys	0m1.145s
00:07:12.134   23:40:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:12.134   23:40:42	-- common/autotest_common.sh@10 -- # set +x
00:07:12.134  ************************************
00:07:12.134  END TEST env_vtophys
00:07:12.134  ************************************
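(The expansion sizes in the vtophys run above, 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB, follow a 2^n + 2 MB pattern: the test doubles its malloc size each round and the heap grows by the request plus 2 MB. That reading is inferred from the log, not from SPDK documentation; a few lines reproduce the sequence:)

    #include <stdio.h>

    /* Reproduce the heap-expansion sizes observed in the log. */
    int main(void)
    {
        for (int mb = 2; mb <= 1024; mb *= 2)
            printf("expand by %d MB\n", mb + 2);  /* 4, 6, 10, ... 1026 */
        return 0;
    }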
00:07:12.134   23:40:42	-- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:07:12.134   23:40:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:12.134   23:40:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:12.134   23:40:42	-- common/autotest_common.sh@10 -- # set +x
00:07:12.134  ************************************
00:07:12.134  START TEST env_pci
00:07:12.134  ************************************
00:07:12.134   23:40:42	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:07:12.134  
00:07:12.134  
00:07:12.134       CUnit - A unit testing framework for C - Version 2.1-3
00:07:12.134       http://cunit.sourceforge.net/
00:07:12.134  
00:07:12.134  
00:07:12.134  Suite: pci
00:07:12.134    Test: pci_hook ...[2024-12-13 23:40:42.860945] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 102683 has claimed it
00:07:12.393  passed
00:07:12.393  
00:07:12.393  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:12.393                suites      1      1    n/a      0        0
00:07:12.393                 tests      1      1      1      0        0
00:07:12.393               asserts     25     25     25      0      n/a
00:07:12.393  
00:07:12.393  Elapsed time =    0.005 seconds
00:07:12.393  EAL: Cannot find device (10000:00:01.0)
00:07:12.393  EAL: Failed to attach device on primary process
00:07:12.393  
00:07:12.393  
00:07:12.393  real	0m0.087s
00:07:12.393  user	0m0.039s
00:07:12.393  sys	0m0.049s
00:07:12.393   23:40:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:12.393   23:40:42	-- common/autotest_common.sh@10 -- # set +x
00:07:12.393  ************************************
00:07:12.393  END TEST env_pci
00:07:12.393  ************************************
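(pci_hook above passes because a second claim on /var/tmp/spdk_pci_lock_10000:00:01.0 is refused while another process holds it. The pattern is a per-device exclusive lock file; this sketch uses flock(), while SPDK's actual locking primitive may differ:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    /* Claim a PCI device by locking a per-device file; returns the
     * held fd on success, -1 if another process owns the claim. */
    static int claim_device(const char *bdf)
    {
        char path[128];
        snprintf(path, sizeof(path), "/var/tmp/spdk_pci_lock_%s", bdf);
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return -1;
        if (flock(fd, LOCK_EX | LOCK_NB) != 0) {
            close(fd);       /* already claimed elsewhere */
            return -1;
        }
        return fd;           /* keep fd open to hold the claim */
    }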
00:07:12.393   23:40:42	-- env/env.sh@14 -- # argv='-c 0x1 '
00:07:12.393    23:40:42	-- env/env.sh@15 -- # uname
00:07:12.393   23:40:42	-- env/env.sh@15 -- # '[' Linux = Linux ']'
00:07:12.393   23:40:42	-- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:07:12.393   23:40:42	-- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:07:12.393   23:40:42	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:07:12.393   23:40:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:12.393   23:40:42	-- common/autotest_common.sh@10 -- # set +x
00:07:12.393  ************************************
00:07:12.393  START TEST env_dpdk_post_init
00:07:12.393  ************************************
00:07:12.393   23:40:42	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:07:12.393  EAL: Detected CPU lcores: 10
00:07:12.393  EAL: Detected NUMA nodes: 1
00:07:12.393  EAL: Detected static linkage of DPDK
00:07:12.393  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:07:12.393  EAL: Selected IOVA mode 'PA'
00:07:12.393  EAL: VFIO support initialized
00:07:12.651  TELEMETRY: No legacy callbacks, legacy socket not created
00:07:12.651  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1)
00:07:12.651  Starting DPDK initialization...
00:07:12.651  Starting SPDK post initialization...
00:07:12.651  SPDK NVMe probe
00:07:12.651  Attaching to 0000:00:06.0
00:07:12.651  Attached to 0000:00:06.0
00:07:12.651  Cleaning up...
00:07:12.651  
00:07:12.651  real	0m0.245s
00:07:12.651  user	0m0.063s
00:07:12.651  sys	0m0.084s
00:07:12.651   23:40:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:12.651  ************************************
00:07:12.651  END TEST env_dpdk_post_init
00:07:12.651  ************************************
00:07:12.651   23:40:43	-- common/autotest_common.sh@10 -- # set +x
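(env_dpdk_post_init runs with -c 0x1 and --base-virtaddr=0x200000000000, which map onto SPDK env options. A minimal sketch of that initialization against spdk/env.h; the field names are from the public header, but treat the snippet as illustrative rather than the test's source:)

    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "env_dpdk_post_init";
        opts.core_mask = "0x1";                  /* -c 0x1           */
        opts.base_virtaddr = 0x200000000000ULL;  /* --base-virtaddr  */

        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "DPDK initialization failed\n");
            return 1;
        }
        printf("Starting SPDK post initialization...\n");
        spdk_env_fini();                         /* "Cleaning up..." */
        return 0;
    }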
00:07:12.651    23:40:43	-- env/env.sh@26 -- # uname
00:07:12.651   23:40:43	-- env/env.sh@26 -- # '[' Linux = Linux ']'
00:07:12.651   23:40:43	-- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:07:12.651   23:40:43	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:12.651   23:40:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:12.651   23:40:43	-- common/autotest_common.sh@10 -- # set +x
00:07:12.651  ************************************
00:07:12.651  START TEST env_mem_callbacks
00:07:12.651  ************************************
00:07:12.651   23:40:43	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:07:12.651  EAL: Detected CPU lcores: 10
00:07:12.651  EAL: Detected NUMA nodes: 1
00:07:12.651  EAL: Detected static linkage of DPDK
00:07:12.651  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:07:12.651  EAL: Selected IOVA mode 'PA'
00:07:12.651  EAL: VFIO support initialized
00:07:12.910  TELEMETRY: No legacy callbacks, legacy socket not created
00:07:12.910  
00:07:12.910  
00:07:12.910       CUnit - A unit testing framework for C - Version 2.1-3
00:07:12.910       http://cunit.sourceforge.net/
00:07:12.910  
00:07:12.910  
00:07:12.910  Suite: memory
00:07:12.910    Test: test ...
00:07:12.910  register 0x200000200000 2097152
00:07:12.910  malloc 3145728
00:07:12.910  register 0x200000400000 4194304
00:07:12.910  buf 0x2000004fffc0 len 3145728 PASSED
00:07:12.910  malloc 64
00:07:12.910  buf 0x2000004ffec0 len 64 PASSED
00:07:12.910  malloc 4194304
00:07:12.910  register 0x200000800000 6291456
00:07:12.910  buf 0x2000009fffc0 len 4194304 PASSED
00:07:12.910  free 0x2000004fffc0 3145728
00:07:12.910  free 0x2000004ffec0 64
00:07:12.910  unregister 0x200000400000 4194304 PASSED
00:07:12.910  free 0x2000009fffc0 4194304
00:07:12.910  unregister 0x200000800000 6291456 PASSED
00:07:12.910  malloc 8388608
00:07:12.910  register 0x200000400000 10485760
00:07:12.910  buf 0x2000005fffc0 len 8388608 PASSED
00:07:12.910  free 0x2000005fffc0 8388608
00:07:12.910  unregister 0x200000400000 10485760 PASSED
00:07:12.910  passed
00:07:12.910  
00:07:12.910  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:12.910                suites      1      1    n/a      0        0
00:07:12.910                 tests      1      1      1      0        0
00:07:12.910               asserts     15     15     15      0      n/a
00:07:12.910  
00:07:12.910  Elapsed time =    0.047 seconds
00:07:12.910  ************************************
00:07:12.911  END TEST env_mem_callbacks
00:07:12.911  ************************************
00:07:12.911  
00:07:12.911  real	0m0.274s
00:07:12.911  user	0m0.094s
00:07:12.911  sys	0m0.077s
00:07:12.911   23:40:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:12.911   23:40:43	-- common/autotest_common.sh@10 -- # set +x
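The register/unregister lines above come in matched pairs per contiguous region. If this output were captured to a file (mem_callbacks.log here is hypothetical), a quick balance check could be scripted — note the initial 2 MiB region registered at startup is released during cleanup, after this excerpt, so it would still show open here:

  # every registered address range should eventually be unregistered
  awk '/^register/   {r[$2] += $3}
       /^unregister/ {r[$2] -= $3}
       END {for (a in r) if (r[a]) print "unbalanced region:", a, r[a]}' mem_callbacks.log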
00:07:12.911  ************************************
00:07:12.911  END TEST env
00:07:12.911  ************************************
00:07:12.911  
00:07:12.911  real	0m8.635s
00:07:12.911  user	0m6.767s
00:07:12.911  sys	0m1.529s
00:07:12.911   23:40:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:12.911   23:40:43	-- common/autotest_common.sh@10 -- # set +x
00:07:12.911   23:40:43	-- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:07:12.911   23:40:43	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:12.911   23:40:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:12.911   23:40:43	-- common/autotest_common.sh@10 -- # set +x
00:07:12.911  ************************************
00:07:12.911  START TEST rpc
00:07:12.911  ************************************
00:07:12.911   23:40:43	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:07:13.170  * Looking for test storage...
00:07:13.170  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:07:13.170    23:40:43	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:13.170     23:40:43	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:13.170     23:40:43	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:13.170    23:40:43	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:13.170    23:40:43	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:13.170    23:40:43	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:13.170    23:40:43	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:13.170    23:40:43	-- scripts/common.sh@335 -- # IFS=.-:
00:07:13.170    23:40:43	-- scripts/common.sh@335 -- # read -ra ver1
00:07:13.170    23:40:43	-- scripts/common.sh@336 -- # IFS=.-:
00:07:13.170    23:40:43	-- scripts/common.sh@336 -- # read -ra ver2
00:07:13.170    23:40:43	-- scripts/common.sh@337 -- # local 'op=<'
00:07:13.170    23:40:43	-- scripts/common.sh@339 -- # ver1_l=2
00:07:13.170    23:40:43	-- scripts/common.sh@340 -- # ver2_l=1
00:07:13.170    23:40:43	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:13.170    23:40:43	-- scripts/common.sh@343 -- # case "$op" in
00:07:13.170    23:40:43	-- scripts/common.sh@344 -- # : 1
00:07:13.170    23:40:43	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:13.170    23:40:43	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:13.170     23:40:43	-- scripts/common.sh@364 -- # decimal 1
00:07:13.170     23:40:43	-- scripts/common.sh@352 -- # local d=1
00:07:13.170     23:40:43	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:13.170     23:40:43	-- scripts/common.sh@354 -- # echo 1
00:07:13.170    23:40:43	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:13.170     23:40:43	-- scripts/common.sh@365 -- # decimal 2
00:07:13.170     23:40:43	-- scripts/common.sh@352 -- # local d=2
00:07:13.170     23:40:43	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:13.170     23:40:43	-- scripts/common.sh@354 -- # echo 2
00:07:13.170    23:40:43	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:13.170    23:40:43	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:13.170    23:40:43	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:13.170    23:40:43	-- scripts/common.sh@367 -- # return 0
00:07:13.170    23:40:43	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:13.170    23:40:43	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:13.170  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.170  		--rc genhtml_branch_coverage=1
00:07:13.170  		--rc genhtml_function_coverage=1
00:07:13.170  		--rc genhtml_legend=1
00:07:13.170  		--rc geninfo_all_blocks=1
00:07:13.170  		--rc geninfo_unexecuted_blocks=1
00:07:13.170  		
00:07:13.170  		'
00:07:13.170    23:40:43	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:13.170  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.170  		--rc genhtml_branch_coverage=1
00:07:13.170  		--rc genhtml_function_coverage=1
00:07:13.170  		--rc genhtml_legend=1
00:07:13.170  		--rc geninfo_all_blocks=1
00:07:13.170  		--rc geninfo_unexecuted_blocks=1
00:07:13.170  		
00:07:13.170  		'
00:07:13.170    23:40:43	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:13.170  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.170  		--rc genhtml_branch_coverage=1
00:07:13.170  		--rc genhtml_function_coverage=1
00:07:13.170  		--rc genhtml_legend=1
00:07:13.170  		--rc geninfo_all_blocks=1
00:07:13.170  		--rc geninfo_unexecuted_blocks=1
00:07:13.170  		
00:07:13.170  		'
00:07:13.170    23:40:43	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:13.170  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.170  		--rc genhtml_branch_coverage=1
00:07:13.170  		--rc genhtml_function_coverage=1
00:07:13.170  		--rc genhtml_legend=1
00:07:13.170  		--rc geninfo_all_blocks=1
00:07:13.170  		--rc geninfo_unexecuted_blocks=1
00:07:13.170  		
00:07:13.170  		'
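Each suite re-runs this preamble: read the lcov version, and if it is older than 2.x, export the --rc lcov_branch_coverage/lcov_function_coverage options shown above. The traced cmp_versions walks version components one by one; a condensed equivalent using sort -V (a simplification, not the script's actual loop):

  lt() { [ "$1" = "$2" ] && return 1
         [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  ver=$(lcov --version | awk '{print $NF}')
  if lt "$ver" 2; then
      export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi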
00:07:13.170   23:40:43	-- rpc/rpc.sh@65 -- # spdk_pid=102821
00:07:13.170   23:40:43	-- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:07:13.170   23:40:43	-- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:07:13.170   23:40:43	-- rpc/rpc.sh@67 -- # waitforlisten 102821
00:07:13.170   23:40:43	-- common/autotest_common.sh@829 -- # '[' -z 102821 ']'
00:07:13.170   23:40:43	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:13.170   23:40:43	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:13.170   23:40:43	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:13.170  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:13.170   23:40:43	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:13.170   23:40:43	-- common/autotest_common.sh@10 -- # set +x
00:07:13.170  [2024-12-13 23:40:43.901756] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:13.170  [2024-12-13 23:40:43.902240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102821 ]
00:07:13.430  [2024-12-13 23:40:44.070797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:13.688  [2024-12-13 23:40:44.271504] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:13.688  [2024-12-13 23:40:44.272036] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:07:13.688  [2024-12-13 23:40:44.272189] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 102821' to capture a snapshot of events at runtime.
00:07:13.688  [2024-12-13 23:40:44.272305] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid102821 for offline analysis/debug.
00:07:13.688  [2024-12-13 23:40:44.272430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.067   23:40:45	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:15.067   23:40:45	-- common/autotest_common.sh@862 -- # return 0
00:07:15.067   23:40:45	-- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:07:15.067   23:40:45	-- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:07:15.067   23:40:45	-- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:07:15.067   23:40:45	-- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:07:15.067   23:40:45	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:15.067   23:40:45	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:15.067   23:40:45	-- common/autotest_common.sh@10 -- # set +x
00:07:15.067  ************************************
00:07:15.067  START TEST rpc_integrity
00:07:15.067  ************************************
00:07:15.067   23:40:45	-- common/autotest_common.sh@1114 -- # rpc_integrity
00:07:15.067    23:40:45	-- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:07:15.067    23:40:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.067    23:40:45	-- common/autotest_common.sh@10 -- # set +x
00:07:15.067    23:40:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.067   23:40:45	-- rpc/rpc.sh@12 -- # bdevs='[]'
00:07:15.067    23:40:45	-- rpc/rpc.sh@13 -- # jq length
00:07:15.067   23:40:45	-- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:07:15.067    23:40:45	-- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:07:15.067    23:40:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.067    23:40:45	-- common/autotest_common.sh@10 -- # set +x
00:07:15.067    23:40:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.067   23:40:45	-- rpc/rpc.sh@15 -- # malloc=Malloc0
00:07:15.067    23:40:45	-- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:07:15.067    23:40:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.067    23:40:45	-- common/autotest_common.sh@10 -- # set +x
00:07:15.067    23:40:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.067   23:40:45	-- rpc/rpc.sh@16 -- # bdevs='[
00:07:15.067  {
00:07:15.067  "name": "Malloc0",
00:07:15.067  "aliases": [
00:07:15.067  "8d8b75a9-24e8-469c-80ab-a4cb324509a1"
00:07:15.067  ],
00:07:15.067  "product_name": "Malloc disk",
00:07:15.067  "block_size": 512,
00:07:15.067  "num_blocks": 16384,
00:07:15.067  "uuid": "8d8b75a9-24e8-469c-80ab-a4cb324509a1",
00:07:15.067  "assigned_rate_limits": {
00:07:15.067  "rw_ios_per_sec": 0,
00:07:15.067  "rw_mbytes_per_sec": 0,
00:07:15.067  "r_mbytes_per_sec": 0,
00:07:15.067  "w_mbytes_per_sec": 0
00:07:15.067  },
00:07:15.067  "claimed": false,
00:07:15.067  "zoned": false,
00:07:15.067  "supported_io_types": {
00:07:15.067  "read": true,
00:07:15.067  "write": true,
00:07:15.067  "unmap": true,
00:07:15.067  "write_zeroes": true,
00:07:15.067  "flush": true,
00:07:15.067  "reset": true,
00:07:15.067  "compare": false,
00:07:15.067  "compare_and_write": false,
00:07:15.067  "abort": true,
00:07:15.067  "nvme_admin": false,
00:07:15.067  "nvme_io": false
00:07:15.067  },
00:07:15.067  "memory_domains": [
00:07:15.067  {
00:07:15.067  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:15.067  "dma_device_type": 2
00:07:15.067  }
00:07:15.067  ],
00:07:15.067  "driver_specific": {}
00:07:15.067  }
00:07:15.067  ]'
00:07:15.067    23:40:45	-- rpc/rpc.sh@17 -- # jq length
00:07:15.067   23:40:45	-- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:07:15.067   23:40:45	-- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:07:15.067   23:40:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.067   23:40:45	-- common/autotest_common.sh@10 -- # set +x
00:07:15.067  [2024-12-13 23:40:45.725096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:07:15.067  [2024-12-13 23:40:45.725206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:15.068  [2024-12-13 23:40:45.725252] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80
00:07:15.068  [2024-12-13 23:40:45.725276] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:15.068  [2024-12-13 23:40:45.727600] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:15.068  [2024-12-13 23:40:45.727684] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:07:15.068  Passthru0
00:07:15.068   23:40:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.068    23:40:45	-- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:07:15.068    23:40:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.068    23:40:45	-- common/autotest_common.sh@10 -- # set +x
00:07:15.068    23:40:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.068   23:40:45	-- rpc/rpc.sh@20 -- # bdevs='[
00:07:15.068  {
00:07:15.068  "name": "Malloc0",
00:07:15.068  "aliases": [
00:07:15.068  "8d8b75a9-24e8-469c-80ab-a4cb324509a1"
00:07:15.068  ],
00:07:15.068  "product_name": "Malloc disk",
00:07:15.068  "block_size": 512,
00:07:15.068  "num_blocks": 16384,
00:07:15.068  "uuid": "8d8b75a9-24e8-469c-80ab-a4cb324509a1",
00:07:15.068  "assigned_rate_limits": {
00:07:15.068  "rw_ios_per_sec": 0,
00:07:15.068  "rw_mbytes_per_sec": 0,
00:07:15.068  "r_mbytes_per_sec": 0,
00:07:15.068  "w_mbytes_per_sec": 0
00:07:15.068  },
00:07:15.068  "claimed": true,
00:07:15.068  "claim_type": "exclusive_write",
00:07:15.068  "zoned": false,
00:07:15.068  "supported_io_types": {
00:07:15.068  "read": true,
00:07:15.068  "write": true,
00:07:15.068  "unmap": true,
00:07:15.068  "write_zeroes": true,
00:07:15.068  "flush": true,
00:07:15.068  "reset": true,
00:07:15.068  "compare": false,
00:07:15.068  "compare_and_write": false,
00:07:15.068  "abort": true,
00:07:15.068  "nvme_admin": false,
00:07:15.068  "nvme_io": false
00:07:15.068  },
00:07:15.068  "memory_domains": [
00:07:15.068  {
00:07:15.068  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:15.068  "dma_device_type": 2
00:07:15.068  }
00:07:15.068  ],
00:07:15.068  "driver_specific": {}
00:07:15.068  },
00:07:15.068  {
00:07:15.068  "name": "Passthru0",
00:07:15.068  "aliases": [
00:07:15.068  "8872f9cb-9a97-5438-b9c5-aa51baf3de06"
00:07:15.068  ],
00:07:15.068  "product_name": "passthru",
00:07:15.068  "block_size": 512,
00:07:15.068  "num_blocks": 16384,
00:07:15.068  "uuid": "8872f9cb-9a97-5438-b9c5-aa51baf3de06",
00:07:15.068  "assigned_rate_limits": {
00:07:15.068  "rw_ios_per_sec": 0,
00:07:15.068  "rw_mbytes_per_sec": 0,
00:07:15.068  "r_mbytes_per_sec": 0,
00:07:15.068  "w_mbytes_per_sec": 0
00:07:15.068  },
00:07:15.068  "claimed": false,
00:07:15.068  "zoned": false,
00:07:15.068  "supported_io_types": {
00:07:15.068  "read": true,
00:07:15.068  "write": true,
00:07:15.068  "unmap": true,
00:07:15.068  "write_zeroes": true,
00:07:15.068  "flush": true,
00:07:15.068  "reset": true,
00:07:15.068  "compare": false,
00:07:15.068  "compare_and_write": false,
00:07:15.068  "abort": true,
00:07:15.068  "nvme_admin": false,
00:07:15.068  "nvme_io": false
00:07:15.068  },
00:07:15.068  "memory_domains": [
00:07:15.068  {
00:07:15.068  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:15.068  "dma_device_type": 2
00:07:15.068  }
00:07:15.068  ],
00:07:15.068  "driver_specific": {
00:07:15.068  "passthru": {
00:07:15.068  "name": "Passthru0",
00:07:15.068  "base_bdev_name": "Malloc0"
00:07:15.068  }
00:07:15.068  }
00:07:15.068  }
00:07:15.068  ]'
00:07:15.068    23:40:45	-- rpc/rpc.sh@21 -- # jq length
00:07:15.338   23:40:45	-- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:07:15.338   23:40:45	-- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:07:15.338   23:40:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.338   23:40:45	-- common/autotest_common.sh@10 -- # set +x
00:07:15.338   23:40:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.338   23:40:45	-- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:07:15.338   23:40:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.338   23:40:45	-- common/autotest_common.sh@10 -- # set +x
00:07:15.338   23:40:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.338    23:40:45	-- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:07:15.338    23:40:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.338    23:40:45	-- common/autotest_common.sh@10 -- # set +x
00:07:15.338    23:40:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.338   23:40:45	-- rpc/rpc.sh@25 -- # bdevs='[]'
00:07:15.338    23:40:45	-- rpc/rpc.sh@26 -- # jq length
00:07:15.338   23:40:45	-- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:07:15.338  
00:07:15.338  real	0m0.314s
00:07:15.338  user	0m0.202s
00:07:15.338  sys	0m0.025s
00:07:15.338   23:40:45	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:15.338   23:40:45	-- common/autotest_common.sh@10 -- # set +x
00:07:15.338  ************************************
00:07:15.338  END TEST rpc_integrity
00:07:15.338  ************************************
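The rpc_integrity flow above can be replayed by hand against the same target; this is a paraphrase of rpc.sh@12-26, not the script itself (rpc.py defaults to /var/tmp/spdk.sock, matching the waitforlisten line earlier):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_get_bdevs | jq length          # 0: nothing registered yet
  malloc=$($rpc bdev_malloc_create 8 512)  # 8 MiB / 512 B blocks -> 16384 blocks
  $rpc bdev_passthru_create -b "$malloc" -p Passthru0
  $rpc bdev_get_bdevs | jq length          # 2: Malloc0 (now claimed) + Passthru0
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete "$malloc"
  $rpc bdev_get_bdevs | jq length          # back to 0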
00:07:15.338   23:40:45	-- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:07:15.338   23:40:45	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:15.338   23:40:45	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:15.338   23:40:45	-- common/autotest_common.sh@10 -- # set +x
00:07:15.338  ************************************
00:07:15.338  START TEST rpc_plugins
00:07:15.338  ************************************
00:07:15.338   23:40:45	-- common/autotest_common.sh@1114 -- # rpc_plugins
00:07:15.338    23:40:45	-- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:07:15.338    23:40:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.338    23:40:45	-- common/autotest_common.sh@10 -- # set +x
00:07:15.338    23:40:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.338   23:40:45	-- rpc/rpc.sh@30 -- # malloc=Malloc1
00:07:15.338    23:40:45	-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:07:15.338    23:40:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.338    23:40:45	-- common/autotest_common.sh@10 -- # set +x
00:07:15.338    23:40:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.338   23:40:45	-- rpc/rpc.sh@31 -- # bdevs='[
00:07:15.338  {
00:07:15.338  "name": "Malloc1",
00:07:15.338  "aliases": [
00:07:15.338  "590d54b3-eac1-4787-bb91-58ffc3945dd7"
00:07:15.338  ],
00:07:15.338  "product_name": "Malloc disk",
00:07:15.338  "block_size": 4096,
00:07:15.338  "num_blocks": 256,
00:07:15.338  "uuid": "590d54b3-eac1-4787-bb91-58ffc3945dd7",
00:07:15.338  "assigned_rate_limits": {
00:07:15.338  "rw_ios_per_sec": 0,
00:07:15.338  "rw_mbytes_per_sec": 0,
00:07:15.338  "r_mbytes_per_sec": 0,
00:07:15.338  "w_mbytes_per_sec": 0
00:07:15.338  },
00:07:15.338  "claimed": false,
00:07:15.338  "zoned": false,
00:07:15.338  "supported_io_types": {
00:07:15.338  "read": true,
00:07:15.338  "write": true,
00:07:15.338  "unmap": true,
00:07:15.338  "write_zeroes": true,
00:07:15.338  "flush": true,
00:07:15.338  "reset": true,
00:07:15.338  "compare": false,
00:07:15.338  "compare_and_write": false,
00:07:15.338  "abort": true,
00:07:15.338  "nvme_admin": false,
00:07:15.338  "nvme_io": false
00:07:15.338  },
00:07:15.338  "memory_domains": [
00:07:15.338  {
00:07:15.338  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:15.338  "dma_device_type": 2
00:07:15.338  }
00:07:15.338  ],
00:07:15.338  "driver_specific": {}
00:07:15.338  }
00:07:15.338  ]'
00:07:15.338    23:40:45	-- rpc/rpc.sh@32 -- # jq length
00:07:15.338   23:40:46	-- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:07:15.338   23:40:46	-- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:07:15.338   23:40:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.338   23:40:46	-- common/autotest_common.sh@10 -- # set +x
00:07:15.338   23:40:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.338    23:40:46	-- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:07:15.338    23:40:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.338    23:40:46	-- common/autotest_common.sh@10 -- # set +x
00:07:15.338    23:40:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.338   23:40:46	-- rpc/rpc.sh@35 -- # bdevs='[]'
00:07:15.338    23:40:46	-- rpc/rpc.sh@36 -- # jq length
00:07:15.596   23:40:46	-- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:07:15.596  
00:07:15.596  real	0m0.150s
00:07:15.596  user	0m0.105s
00:07:15.596  sys	0m0.014s
00:07:15.596   23:40:46	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:15.596   23:40:46	-- common/autotest_common.sh@10 -- # set +x
00:07:15.596  ************************************
00:07:15.596  END TEST rpc_plugins
00:07:15.596  ************************************
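rpc_plugins exercises rpc.py's plugin loader: the PYTHONPATH exported at rpc.sh@69 earlier makes test/rpc_plugins importable, and --plugin names the module whose methods become subcommands. The invocation pattern, as seen in the trace:

  export PYTHONPATH=$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
  scripts/rpc.py --plugin rpc_plugin create_malloc          # returned Malloc1 above
  scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1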
00:07:15.596   23:40:46	-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:07:15.596   23:40:46	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:15.596   23:40:46	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:15.596   23:40:46	-- common/autotest_common.sh@10 -- # set +x
00:07:15.596  ************************************
00:07:15.596  START TEST rpc_trace_cmd_test
00:07:15.596  ************************************
00:07:15.596   23:40:46	-- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test
00:07:15.596   23:40:46	-- rpc/rpc.sh@40 -- # local info
00:07:15.596    23:40:46	-- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:07:15.596    23:40:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.596    23:40:46	-- common/autotest_common.sh@10 -- # set +x
00:07:15.596    23:40:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.596   23:40:46	-- rpc/rpc.sh@42 -- # info='{
00:07:15.596  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid102821",
00:07:15.596  "tpoint_group_mask": "0x8",
00:07:15.596  "iscsi_conn": {
00:07:15.596  "mask": "0x2",
00:07:15.596  "tpoint_mask": "0x0"
00:07:15.596  },
00:07:15.596  "scsi": {
00:07:15.596  "mask": "0x4",
00:07:15.596  "tpoint_mask": "0x0"
00:07:15.596  },
00:07:15.596  "bdev": {
00:07:15.596  "mask": "0x8",
00:07:15.596  "tpoint_mask": "0xffffffffffffffff"
00:07:15.596  },
00:07:15.596  "nvmf_rdma": {
00:07:15.596  "mask": "0x10",
00:07:15.596  "tpoint_mask": "0x0"
00:07:15.596  },
00:07:15.596  "nvmf_tcp": {
00:07:15.596  "mask": "0x20",
00:07:15.596  "tpoint_mask": "0x0"
00:07:15.596  },
00:07:15.596  "ftl": {
00:07:15.596  "mask": "0x40",
00:07:15.596  "tpoint_mask": "0x0"
00:07:15.596  },
00:07:15.596  "blobfs": {
00:07:15.596  "mask": "0x80",
00:07:15.596  "tpoint_mask": "0x0"
00:07:15.596  },
00:07:15.596  "dsa": {
00:07:15.596  "mask": "0x200",
00:07:15.596  "tpoint_mask": "0x0"
00:07:15.596  },
00:07:15.596  "thread": {
00:07:15.596  "mask": "0x400",
00:07:15.596  "tpoint_mask": "0x0"
00:07:15.596  },
00:07:15.596  "nvme_pcie": {
00:07:15.596  "mask": "0x800",
00:07:15.596  "tpoint_mask": "0x0"
00:07:15.596  },
00:07:15.596  "iaa": {
00:07:15.596  "mask": "0x1000",
00:07:15.596  "tpoint_mask": "0x0"
00:07:15.596  },
00:07:15.596  "nvme_tcp": {
00:07:15.596  "mask": "0x2000",
00:07:15.596  "tpoint_mask": "0x0"
00:07:15.596  },
00:07:15.596  "bdev_nvme": {
00:07:15.596  "mask": "0x4000",
00:07:15.596  "tpoint_mask": "0x0"
00:07:15.596  }
00:07:15.596  }'
00:07:15.596    23:40:46	-- rpc/rpc.sh@43 -- # jq length
00:07:15.596   23:40:46	-- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']'
00:07:15.596    23:40:46	-- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:07:15.596   23:40:46	-- rpc/rpc.sh@44 -- # '[' true = true ']'
00:07:15.596    23:40:46	-- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:07:15.596   23:40:46	-- rpc/rpc.sh@45 -- # '[' true = true ']'
00:07:15.596    23:40:46	-- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:07:15.855   23:40:46	-- rpc/rpc.sh@46 -- # '[' true = true ']'
00:07:15.855    23:40:46	-- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:07:15.855   23:40:46	-- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:07:15.855  
00:07:15.855  real	0m0.259s
00:07:15.855  user	0m0.237s
00:07:15.855  sys	0m0.019s
00:07:15.855  ************************************
00:07:15.855  END TEST rpc_trace_cmd_test
00:07:15.855  ************************************
00:07:15.855   23:40:46	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:15.855   23:40:46	-- common/autotest_common.sh@10 -- # set +x
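rpc_trace_cmd_test asserts on the trace_get_info JSON printed above: more than two tpoint groups, the shm path and group mask present, and the bdev group (enabled via spdk_tgt -e bdev) carrying a non-zero mask. The same checks by hand:

  info=$(scripts/rpc.py trace_get_info)
  jq 'length > 2'               <<<"$info"   # several tpoint groups listed
  jq 'has("tpoint_group_mask")' <<<"$info"   # true
  jq 'has("tpoint_shm_path")'   <<<"$info"   # true
  jq -r '.bdev.tpoint_mask'     <<<"$info"   # 0xffffffffffffffff, since -e bdev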
00:07:15.855   23:40:46	-- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:07:15.855   23:40:46	-- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:07:15.855   23:40:46	-- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:07:15.855   23:40:46	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:15.855   23:40:46	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:15.855   23:40:46	-- common/autotest_common.sh@10 -- # set +x
00:07:15.855  ************************************
00:07:15.855  START TEST rpc_daemon_integrity
00:07:15.855  ************************************
00:07:15.855   23:40:46	-- common/autotest_common.sh@1114 -- # rpc_integrity
00:07:15.855    23:40:46	-- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:07:15.855    23:40:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.855    23:40:46	-- common/autotest_common.sh@10 -- # set +x
00:07:15.855    23:40:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.855   23:40:46	-- rpc/rpc.sh@12 -- # bdevs='[]'
00:07:15.855    23:40:46	-- rpc/rpc.sh@13 -- # jq length
00:07:15.855   23:40:46	-- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:07:15.855    23:40:46	-- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:07:15.855    23:40:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.855    23:40:46	-- common/autotest_common.sh@10 -- # set +x
00:07:15.855    23:40:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.855   23:40:46	-- rpc/rpc.sh@15 -- # malloc=Malloc2
00:07:15.855    23:40:46	-- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:07:15.855    23:40:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.855    23:40:46	-- common/autotest_common.sh@10 -- # set +x
00:07:15.855    23:40:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.855   23:40:46	-- rpc/rpc.sh@16 -- # bdevs='[
00:07:15.855  {
00:07:15.855  "name": "Malloc2",
00:07:15.855  "aliases": [
00:07:15.855  "aff4b96e-a13d-4417-b238-b9cace29db18"
00:07:15.855  ],
00:07:15.855  "product_name": "Malloc disk",
00:07:15.855  "block_size": 512,
00:07:15.855  "num_blocks": 16384,
00:07:15.855  "uuid": "aff4b96e-a13d-4417-b238-b9cace29db18",
00:07:15.855  "assigned_rate_limits": {
00:07:15.855  "rw_ios_per_sec": 0,
00:07:15.855  "rw_mbytes_per_sec": 0,
00:07:15.855  "r_mbytes_per_sec": 0,
00:07:15.855  "w_mbytes_per_sec": 0
00:07:15.855  },
00:07:15.855  "claimed": false,
00:07:15.855  "zoned": false,
00:07:15.855  "supported_io_types": {
00:07:15.855  "read": true,
00:07:15.855  "write": true,
00:07:15.855  "unmap": true,
00:07:15.855  "write_zeroes": true,
00:07:15.855  "flush": true,
00:07:15.855  "reset": true,
00:07:15.855  "compare": false,
00:07:15.855  "compare_and_write": false,
00:07:15.855  "abort": true,
00:07:15.855  "nvme_admin": false,
00:07:15.855  "nvme_io": false
00:07:15.855  },
00:07:15.855  "memory_domains": [
00:07:15.855  {
00:07:15.855  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:15.855  "dma_device_type": 2
00:07:15.855  }
00:07:15.855  ],
00:07:15.855  "driver_specific": {}
00:07:15.855  }
00:07:15.855  ]'
00:07:15.855    23:40:46	-- rpc/rpc.sh@17 -- # jq length
00:07:16.114   23:40:46	-- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:07:16.114   23:40:46	-- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:07:16.114   23:40:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.114   23:40:46	-- common/autotest_common.sh@10 -- # set +x
00:07:16.114  [2024-12-13 23:40:46.594136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:07:16.114  [2024-12-13 23:40:46.594235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:16.114  [2024-12-13 23:40:46.594281] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:07:16.114  [2024-12-13 23:40:46.594304] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:16.114  [2024-12-13 23:40:46.596887] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:16.114  [2024-12-13 23:40:46.596975] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:07:16.114  Passthru0
00:07:16.114   23:40:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.114    23:40:46	-- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:07:16.114    23:40:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.114    23:40:46	-- common/autotest_common.sh@10 -- # set +x
00:07:16.114    23:40:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.114   23:40:46	-- rpc/rpc.sh@20 -- # bdevs='[
00:07:16.114  {
00:07:16.114  "name": "Malloc2",
00:07:16.114  "aliases": [
00:07:16.114  "aff4b96e-a13d-4417-b238-b9cace29db18"
00:07:16.114  ],
00:07:16.114  "product_name": "Malloc disk",
00:07:16.114  "block_size": 512,
00:07:16.114  "num_blocks": 16384,
00:07:16.114  "uuid": "aff4b96e-a13d-4417-b238-b9cace29db18",
00:07:16.114  "assigned_rate_limits": {
00:07:16.114  "rw_ios_per_sec": 0,
00:07:16.114  "rw_mbytes_per_sec": 0,
00:07:16.114  "r_mbytes_per_sec": 0,
00:07:16.114  "w_mbytes_per_sec": 0
00:07:16.114  },
00:07:16.114  "claimed": true,
00:07:16.114  "claim_type": "exclusive_write",
00:07:16.114  "zoned": false,
00:07:16.114  "supported_io_types": {
00:07:16.114  "read": true,
00:07:16.114  "write": true,
00:07:16.114  "unmap": true,
00:07:16.114  "write_zeroes": true,
00:07:16.114  "flush": true,
00:07:16.114  "reset": true,
00:07:16.114  "compare": false,
00:07:16.114  "compare_and_write": false,
00:07:16.114  "abort": true,
00:07:16.115  "nvme_admin": false,
00:07:16.115  "nvme_io": false
00:07:16.115  },
00:07:16.115  "memory_domains": [
00:07:16.115  {
00:07:16.115  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:16.115  "dma_device_type": 2
00:07:16.115  }
00:07:16.115  ],
00:07:16.115  "driver_specific": {}
00:07:16.115  },
00:07:16.115  {
00:07:16.115  "name": "Passthru0",
00:07:16.115  "aliases": [
00:07:16.115  "384a02d2-b574-5ae2-a3f2-e6b2cefb9181"
00:07:16.115  ],
00:07:16.115  "product_name": "passthru",
00:07:16.115  "block_size": 512,
00:07:16.115  "num_blocks": 16384,
00:07:16.115  "uuid": "384a02d2-b574-5ae2-a3f2-e6b2cefb9181",
00:07:16.115  "assigned_rate_limits": {
00:07:16.115  "rw_ios_per_sec": 0,
00:07:16.115  "rw_mbytes_per_sec": 0,
00:07:16.115  "r_mbytes_per_sec": 0,
00:07:16.115  "w_mbytes_per_sec": 0
00:07:16.115  },
00:07:16.115  "claimed": false,
00:07:16.115  "zoned": false,
00:07:16.115  "supported_io_types": {
00:07:16.115  "read": true,
00:07:16.115  "write": true,
00:07:16.115  "unmap": true,
00:07:16.115  "write_zeroes": true,
00:07:16.115  "flush": true,
00:07:16.115  "reset": true,
00:07:16.115  "compare": false,
00:07:16.115  "compare_and_write": false,
00:07:16.115  "abort": true,
00:07:16.115  "nvme_admin": false,
00:07:16.115  "nvme_io": false
00:07:16.115  },
00:07:16.115  "memory_domains": [
00:07:16.115  {
00:07:16.115  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:16.115  "dma_device_type": 2
00:07:16.115  }
00:07:16.115  ],
00:07:16.115  "driver_specific": {
00:07:16.115  "passthru": {
00:07:16.115  "name": "Passthru0",
00:07:16.115  "base_bdev_name": "Malloc2"
00:07:16.115  }
00:07:16.115  }
00:07:16.115  }
00:07:16.115  ]'
00:07:16.115    23:40:46	-- rpc/rpc.sh@21 -- # jq length
00:07:16.115   23:40:46	-- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:07:16.115   23:40:46	-- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:07:16.115   23:40:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.115   23:40:46	-- common/autotest_common.sh@10 -- # set +x
00:07:16.115   23:40:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.115   23:40:46	-- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:07:16.115   23:40:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.115   23:40:46	-- common/autotest_common.sh@10 -- # set +x
00:07:16.115   23:40:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.115    23:40:46	-- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:07:16.115    23:40:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.115    23:40:46	-- common/autotest_common.sh@10 -- # set +x
00:07:16.115    23:40:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.115   23:40:46	-- rpc/rpc.sh@25 -- # bdevs='[]'
00:07:16.115    23:40:46	-- rpc/rpc.sh@26 -- # jq length
00:07:16.115   23:40:46	-- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:07:16.115  
00:07:16.115  real	0m0.305s
00:07:16.115  user	0m0.209s
00:07:16.115  sys	0m0.022s
00:07:16.115   23:40:46	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:16.115   23:40:46	-- common/autotest_common.sh@10 -- # set +x
00:07:16.115  ************************************
00:07:16.115  END TEST rpc_daemon_integrity
00:07:16.115  ************************************
00:07:16.115   23:40:46	-- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:07:16.115   23:40:46	-- rpc/rpc.sh@84 -- # killprocess 102821
00:07:16.115   23:40:46	-- common/autotest_common.sh@936 -- # '[' -z 102821 ']'
00:07:16.115   23:40:46	-- common/autotest_common.sh@940 -- # kill -0 102821
00:07:16.115    23:40:46	-- common/autotest_common.sh@941 -- # uname
00:07:16.115   23:40:46	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:16.115    23:40:46	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 102821
00:07:16.115   23:40:46	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:16.115   23:40:46	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:16.115   23:40:46	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 102821'
00:07:16.115  killing process with pid 102821
00:07:16.115   23:40:46	-- common/autotest_common.sh@955 -- # kill 102821
00:07:16.115   23:40:46	-- common/autotest_common.sh@960 -- # wait 102821
00:07:18.646  
00:07:18.646  real	0m5.157s
00:07:18.646  user	0m6.005s
00:07:18.646  sys	0m0.850s
00:07:18.646   23:40:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:18.646   23:40:48	-- common/autotest_common.sh@10 -- # set +x
00:07:18.646  ************************************
00:07:18.646  END TEST rpc
00:07:18.646  ************************************
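The shutdown sequence at the end of the suite (the kill -0 probe, the ps comm check, then kill and wait) is autotest_common.sh's killprocess helper. A condensed sketch of that behavior — simplified; the real helper also special-cases processes launched via sudo, which is what the ps/comm check above is for:

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 1   # is it still alive?
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                  # reap it and collect exit status
  }
  killprocess 102821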
00:07:18.646   23:40:48	-- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:07:18.646   23:40:48	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:18.646   23:40:48	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:18.646   23:40:48	-- common/autotest_common.sh@10 -- # set +x
00:07:18.646  ************************************
00:07:18.646  START TEST rpc_client
00:07:18.646  ************************************
00:07:18.646   23:40:48	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:07:18.646  * Looking for test storage...
00:07:18.646  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:07:18.646    23:40:48	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:18.646     23:40:48	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:18.646     23:40:48	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:18.646    23:40:48	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:18.646    23:40:48	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:18.646    23:40:48	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:18.646    23:40:48	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:18.646    23:40:48	-- scripts/common.sh@335 -- # IFS=.-:
00:07:18.646    23:40:48	-- scripts/common.sh@335 -- # read -ra ver1
00:07:18.646    23:40:48	-- scripts/common.sh@336 -- # IFS=.-:
00:07:18.646    23:40:48	-- scripts/common.sh@336 -- # read -ra ver2
00:07:18.646    23:40:48	-- scripts/common.sh@337 -- # local 'op=<'
00:07:18.646    23:40:48	-- scripts/common.sh@339 -- # ver1_l=2
00:07:18.646    23:40:48	-- scripts/common.sh@340 -- # ver2_l=1
00:07:18.646    23:40:48	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:18.646    23:40:48	-- scripts/common.sh@343 -- # case "$op" in
00:07:18.646    23:40:48	-- scripts/common.sh@344 -- # : 1
00:07:18.646    23:40:48	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:18.646    23:40:48	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:18.646     23:40:48	-- scripts/common.sh@364 -- # decimal 1
00:07:18.646     23:40:48	-- scripts/common.sh@352 -- # local d=1
00:07:18.646     23:40:48	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:18.646     23:40:48	-- scripts/common.sh@354 -- # echo 1
00:07:18.646    23:40:48	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:18.646     23:40:48	-- scripts/common.sh@365 -- # decimal 2
00:07:18.646     23:40:48	-- scripts/common.sh@352 -- # local d=2
00:07:18.646     23:40:48	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:18.646     23:40:48	-- scripts/common.sh@354 -- # echo 2
00:07:18.646    23:40:48	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:18.646    23:40:49	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:18.646    23:40:49	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:18.646    23:40:49	-- scripts/common.sh@367 -- # return 0
00:07:18.646    23:40:49	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:18.646    23:40:49	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:18.646  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.646  		--rc genhtml_branch_coverage=1
00:07:18.646  		--rc genhtml_function_coverage=1
00:07:18.646  		--rc genhtml_legend=1
00:07:18.646  		--rc geninfo_all_blocks=1
00:07:18.646  		--rc geninfo_unexecuted_blocks=1
00:07:18.646  		
00:07:18.646  		'
00:07:18.646    23:40:49	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:18.646  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.646  		--rc genhtml_branch_coverage=1
00:07:18.646  		--rc genhtml_function_coverage=1
00:07:18.646  		--rc genhtml_legend=1
00:07:18.646  		--rc geninfo_all_blocks=1
00:07:18.646  		--rc geninfo_unexecuted_blocks=1
00:07:18.646  		
00:07:18.646  		'
00:07:18.646    23:40:49	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:18.646  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.646  		--rc genhtml_branch_coverage=1
00:07:18.646  		--rc genhtml_function_coverage=1
00:07:18.646  		--rc genhtml_legend=1
00:07:18.646  		--rc geninfo_all_blocks=1
00:07:18.646  		--rc geninfo_unexecuted_blocks=1
00:07:18.646  		
00:07:18.646  		'
00:07:18.646    23:40:49	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:18.646  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.646  		--rc genhtml_branch_coverage=1
00:07:18.646  		--rc genhtml_function_coverage=1
00:07:18.646  		--rc genhtml_legend=1
00:07:18.646  		--rc geninfo_all_blocks=1
00:07:18.646  		--rc geninfo_unexecuted_blocks=1
00:07:18.646  		
00:07:18.646  		'
00:07:18.646   23:40:49	-- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:07:18.646  OK
00:07:18.646   23:40:49	-- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:07:18.646  
00:07:18.646  real	0m0.233s
00:07:18.646  user	0m0.174s
00:07:18.646  sys	0m0.076s
00:07:18.646   23:40:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:18.646   23:40:49	-- common/autotest_common.sh@10 -- # set +x
00:07:18.646  ************************************
00:07:18.646  END TEST rpc_client
00:07:18.646  ************************************
00:07:18.647   23:40:49	-- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:07:18.647   23:40:49	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:18.647   23:40:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:18.647   23:40:49	-- common/autotest_common.sh@10 -- # set +x
00:07:18.647  ************************************
00:07:18.647  START TEST json_config
00:07:18.647  ************************************
00:07:18.647   23:40:49	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:07:18.647    23:40:49	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:18.647     23:40:49	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:18.647     23:40:49	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:18.647    23:40:49	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:18.647    23:40:49	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:18.647    23:40:49	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:18.647    23:40:49	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:18.647    23:40:49	-- scripts/common.sh@335 -- # IFS=.-:
00:07:18.647    23:40:49	-- scripts/common.sh@335 -- # read -ra ver1
00:07:18.647    23:40:49	-- scripts/common.sh@336 -- # IFS=.-:
00:07:18.647    23:40:49	-- scripts/common.sh@336 -- # read -ra ver2
00:07:18.647    23:40:49	-- scripts/common.sh@337 -- # local 'op=<'
00:07:18.647    23:40:49	-- scripts/common.sh@339 -- # ver1_l=2
00:07:18.647    23:40:49	-- scripts/common.sh@340 -- # ver2_l=1
00:07:18.647    23:40:49	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:18.647    23:40:49	-- scripts/common.sh@343 -- # case "$op" in
00:07:18.647    23:40:49	-- scripts/common.sh@344 -- # : 1
00:07:18.647    23:40:49	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:18.647    23:40:49	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:18.647     23:40:49	-- scripts/common.sh@364 -- # decimal 1
00:07:18.647     23:40:49	-- scripts/common.sh@352 -- # local d=1
00:07:18.647     23:40:49	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:18.647     23:40:49	-- scripts/common.sh@354 -- # echo 1
00:07:18.647    23:40:49	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:18.647     23:40:49	-- scripts/common.sh@365 -- # decimal 2
00:07:18.647     23:40:49	-- scripts/common.sh@352 -- # local d=2
00:07:18.647     23:40:49	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:18.647     23:40:49	-- scripts/common.sh@354 -- # echo 2
00:07:18.647    23:40:49	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:18.647    23:40:49	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:18.647    23:40:49	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:18.647    23:40:49	-- scripts/common.sh@367 -- # return 0
00:07:18.647    23:40:49	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:18.647    23:40:49	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:18.647  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.647  		--rc genhtml_branch_coverage=1
00:07:18.647  		--rc genhtml_function_coverage=1
00:07:18.647  		--rc genhtml_legend=1
00:07:18.647  		--rc geninfo_all_blocks=1
00:07:18.647  		--rc geninfo_unexecuted_blocks=1
00:07:18.647  		
00:07:18.647  		'
00:07:18.647    23:40:49	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:18.647  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.647  		--rc genhtml_branch_coverage=1
00:07:18.647  		--rc genhtml_function_coverage=1
00:07:18.647  		--rc genhtml_legend=1
00:07:18.647  		--rc geninfo_all_blocks=1
00:07:18.647  		--rc geninfo_unexecuted_blocks=1
00:07:18.647  		
00:07:18.647  		'
00:07:18.647    23:40:49	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:18.647  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.647  		--rc genhtml_branch_coverage=1
00:07:18.647  		--rc genhtml_function_coverage=1
00:07:18.647  		--rc genhtml_legend=1
00:07:18.647  		--rc geninfo_all_blocks=1
00:07:18.647  		--rc geninfo_unexecuted_blocks=1
00:07:18.647  		
00:07:18.647  		'
00:07:18.647    23:40:49	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:18.647  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.647  		--rc genhtml_branch_coverage=1
00:07:18.647  		--rc genhtml_function_coverage=1
00:07:18.647  		--rc genhtml_legend=1
00:07:18.647  		--rc geninfo_all_blocks=1
00:07:18.647  		--rc geninfo_unexecuted_blocks=1
00:07:18.647  		
00:07:18.647  		'
00:07:18.647   23:40:49	-- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:07:18.647     23:40:49	-- nvmf/common.sh@7 -- # uname -s
00:07:18.647    23:40:49	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:18.647    23:40:49	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:18.647    23:40:49	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:18.647    23:40:49	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:18.647    23:40:49	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:18.647    23:40:49	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:18.647    23:40:49	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:18.647    23:40:49	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:18.647    23:40:49	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:18.647     23:40:49	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:18.647    23:40:49	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4b987259-2b2e-4b23-8b74-6ff63a6e4318
00:07:18.647    23:40:49	-- nvmf/common.sh@18 -- # NVME_HOSTID=4b987259-2b2e-4b23-8b74-6ff63a6e4318
00:07:18.647    23:40:49	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:18.647    23:40:49	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:18.647    23:40:49	-- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:18.647    23:40:49	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:18.647     23:40:49	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:18.647     23:40:49	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:18.647     23:40:49	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:18.647      23:40:49	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:07:18.647      23:40:49	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:07:18.647      23:40:49	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:07:18.647      23:40:49	-- paths/export.sh@5 -- # export PATH
00:07:18.647      23:40:49	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:07:18.647    23:40:49	-- nvmf/common.sh@46 -- # : 0
00:07:18.647    23:40:49	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:07:18.647    23:40:49	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:07:18.647    23:40:49	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:07:18.647    23:40:49	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:18.647    23:40:49	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:18.647    23:40:49	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:07:18.647    23:40:49	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:07:18.647    23:40:49	-- nvmf/common.sh@50 -- # have_pci_nics=0
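Sourcing nvmf/common.sh above derives a host identity once per run: nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the host ID is just its UUID suffix. The equivalent two-liner:

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:4b98...
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # strip through the last ':' to keep the uuid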
00:07:18.647   23:40:49	-- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]]
00:07:18.647   23:40:49	-- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]]
00:07:18.647   23:40:49	-- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]]
00:07:18.647   23:40:49	-- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:07:18.647   23:40:49	-- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='')
00:07:18.647   23:40:49	-- json_config/json_config.sh@30 -- # declare -A app_pid
00:07:18.647   23:40:49	-- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:07:18.647   23:40:49	-- json_config/json_config.sh@31 -- # declare -A app_socket
00:07:18.647   23:40:49	-- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:07:18.647   23:40:49	-- json_config/json_config.sh@32 -- # declare -A app_params
00:07:18.647   23:40:49	-- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json')
00:07:18.647   23:40:49	-- json_config/json_config.sh@33 -- # declare -A configs_path
00:07:18.648   23:40:49	-- json_config/json_config.sh@43 -- # last_event_id=0
00:07:18.648   23:40:49	-- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:07:18.648  INFO: JSON configuration test init
00:07:18.648   23:40:49	-- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init'
00:07:18.648   23:40:49	-- json_config/json_config.sh@420 -- # json_config_test_init
00:07:18.648   23:40:49	-- json_config/json_config.sh@315 -- # timing_enter json_config_test_init
00:07:18.648   23:40:49	-- common/autotest_common.sh@722 -- # xtrace_disable
00:07:18.648   23:40:49	-- common/autotest_common.sh@10 -- # set +x
00:07:18.648   23:40:49	-- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target
00:07:18.648   23:40:49	-- common/autotest_common.sh@722 -- # xtrace_disable
00:07:18.648   23:40:49	-- common/autotest_common.sh@10 -- # set +x
00:07:18.648   23:40:49	-- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc
00:07:18.648   23:40:49	-- json_config/json_config.sh@98 -- # local app=target
00:07:18.648   23:40:49	-- json_config/json_config.sh@99 -- # shift
00:07:18.648   23:40:49	-- json_config/json_config.sh@101 -- # [[ -n 22 ]]
00:07:18.648   23:40:49	-- json_config/json_config.sh@102 -- # [[ -z '' ]]
00:07:18.648   23:40:49	-- json_config/json_config.sh@104 -- # local app_extra_params=
00:07:18.648   23:40:49	-- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:07:18.648   23:40:49	-- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:07:18.648   23:40:49	-- json_config/json_config.sh@111 -- # app_pid[$app]=103142
00:07:18.648  Waiting for target to run...
00:07:18.648   23:40:49	-- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...'
00:07:18.648   23:40:49	-- json_config/json_config.sh@114 -- # waitforlisten 103142 /var/tmp/spdk_tgt.sock
00:07:18.648   23:40:49	-- common/autotest_common.sh@829 -- # '[' -z 103142 ']'
00:07:18.648   23:40:49	-- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:07:18.648   23:40:49	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:18.648   23:40:49	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:18.648   23:40:49	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:18.648  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:18.648   23:40:49	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:18.648   23:40:49	-- common/autotest_common.sh@10 -- # set +x
00:07:18.648  [2024-12-13 23:40:49.370983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:18.648  [2024-12-13 23:40:49.371190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103142 ]
00:07:19.214  [2024-12-13 23:40:49.928800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:19.486  [2024-12-13 23:40:50.097607] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:19.486  [2024-12-13 23:40:50.097927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.744   23:40:50	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:19.744   23:40:50	-- common/autotest_common.sh@862 -- # return 0
00:07:19.744  
00:07:19.744   23:40:50	-- json_config/json_config.sh@115 -- # echo ''
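waitforlisten (max_retries=100 above) polls until the freshly started spdk_tgt answers on its UNIX socket; rpc_get_methods serves as the "are you up" probe. A condensed sketch of the loop, simplified from autotest_common.sh:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 100; i != 0; i--)); do
          kill -0 "$pid" || return 1    # the app died while starting up
          scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
          sleep 0.5
      done
      return 1
  }
  waitforlisten 103142 /var/tmp/spdk_tgt.sock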
00:07:19.744   23:40:50	-- json_config/json_config.sh@322 -- # create_accel_config
00:07:19.744   23:40:50	-- json_config/json_config.sh@146 -- # timing_enter create_accel_config
00:07:19.744   23:40:50	-- common/autotest_common.sh@722 -- # xtrace_disable
00:07:19.744   23:40:50	-- common/autotest_common.sh@10 -- # set +x
00:07:19.744   23:40:50	-- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]]
00:07:19.744   23:40:50	-- json_config/json_config.sh@154 -- # timing_exit create_accel_config
00:07:19.744   23:40:50	-- common/autotest_common.sh@728 -- # xtrace_disable
00:07:19.744   23:40:50	-- common/autotest_common.sh@10 -- # set +x
00:07:19.744   23:40:50	-- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:07:19.744   23:40:50	-- json_config/json_config.sh@327 -- # tgt_rpc load_config
00:07:19.744   23:40:50	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:07:20.679   23:40:51	-- json_config/json_config.sh@329 -- # tgt_check_notification_types
00:07:20.679   23:40:51	-- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types
00:07:20.679   23:40:51	-- common/autotest_common.sh@722 -- # xtrace_disable
00:07:20.679   23:40:51	-- common/autotest_common.sh@10 -- # set +x
00:07:20.679   23:40:51	-- json_config/json_config.sh@48 -- # local ret=0
00:07:20.679   23:40:51	-- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:07:20.679   23:40:51	-- json_config/json_config.sh@49 -- # local enabled_types
00:07:20.679    23:40:51	-- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:07:20.679    23:40:51	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:07:20.679    23:40:51	-- json_config/json_config.sh@51 -- # jq -r '.[]'
00:07:20.937   23:40:51	-- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister')
00:07:20.938   23:40:51	-- json_config/json_config.sh@51 -- # local get_types
00:07:20.938   23:40:51	-- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]]
00:07:20.938   23:40:51	-- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types
00:07:20.938   23:40:51	-- common/autotest_common.sh@728 -- # xtrace_disable
00:07:20.938   23:40:51	-- common/autotest_common.sh@10 -- # set +x
00:07:20.938   23:40:51	-- json_config/json_config.sh@58 -- # return 0
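tgt_check_notification_types, traced above, asks the target which notification types are enabled and compares them against the expected pair. Reproduced by hand (same RPC and jq filter as in the trace; the array handling is a simplified sketch):

    enabled_types=(bdev_register bdev_unregister)
    mapfile -t get_types < <(./scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]')
    # Joined-string comparison, as in the traced [[ ... != ... ]] test
    [[ "${get_types[*]}" == "${enabled_types[*]}" ]] && echo 'notification types OK'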
00:07:20.938   23:40:51	-- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]]
00:07:20.938   23:40:51	-- json_config/json_config.sh@332 -- # create_bdev_subsystem_config
00:07:20.938   23:40:51	-- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config
00:07:20.938   23:40:51	-- common/autotest_common.sh@722 -- # xtrace_disable
00:07:20.938   23:40:51	-- common/autotest_common.sh@10 -- # set +x
00:07:20.938   23:40:51	-- json_config/json_config.sh@160 -- # expected_notifications=()
00:07:20.938   23:40:51	-- json_config/json_config.sh@160 -- # local expected_notifications
00:07:20.938   23:40:51	-- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications))
00:07:20.938    23:40:51	-- json_config/json_config.sh@164 -- # get_notifications
00:07:20.938    23:40:51	-- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id
00:07:20.938    23:40:51	-- json_config/json_config.sh@64 -- # IFS=:
00:07:20.938    23:40:51	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:20.938     23:40:51	-- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0
00:07:20.938     23:40:51	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:07:20.938     23:40:51	-- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:07:21.195    23:40:51	-- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1
00:07:21.195    23:40:51	-- json_config/json_config.sh@64 -- # IFS=:
00:07:21.196    23:40:51	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:21.196   23:40:51	-- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]]
00:07:21.196   23:40:51	-- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1
00:07:21.196   23:40:51	-- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2
00:07:21.196   23:40:51	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2
00:07:21.454  Nvme0n1p0 Nvme0n1p1
00:07:21.454   23:40:51	-- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3
00:07:21.454   23:40:51	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3
00:07:21.713  [2024-12-13 23:40:52.207568] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:07:21.713  [2024-12-13 23:40:52.207669] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:07:21.713  
00:07:21.713   23:40:52	-- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3
00:07:21.713   23:40:52	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3
00:07:21.713  Malloc3
00:07:21.713   23:40:52	-- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:07:21.713   23:40:52	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:07:21.972  [2024-12-13 23:40:52.594549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:07:21.972  [2024-12-13 23:40:52.594643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:21.972  [2024-12-13 23:40:52.594679] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:07:21.972  [2024-12-13 23:40:52.594742] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:21.972  [2024-12-13 23:40:52.597251] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:21.972  [2024-12-13 23:40:52.597335] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:07:21.972  PTBdevFromMalloc3
00:07:21.972   23:40:52	-- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512
00:07:21.972   23:40:52	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512
00:07:22.230  Null0
00:07:22.230   23:40:52	-- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0
00:07:22.230   23:40:52	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0
00:07:22.488  Malloc0
00:07:22.488   23:40:53	-- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1
00:07:22.488   23:40:53	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1
00:07:22.746  Malloc1
00:07:22.746   23:40:53	-- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1)
00:07:22.746   23:40:53	-- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400
00:07:23.005  102400+0 records in
00:07:23.005  102400+0 records out
00:07:23.005  104857600 bytes (105 MB, 100 MiB) copied, 0.28815 s, 364 MB/s
00:07:23.005   23:40:53	-- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024
00:07:23.005   23:40:53	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024
00:07:23.264  aio_disk
00:07:23.264   23:40:53	-- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk)
00:07:23.264   23:40:53	-- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:07:23.264   23:40:53	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:07:23.264  ad2d1c38-6837-48ed-a277-9a910cfe1f7d
00:07:23.264   23:40:53	-- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)")
00:07:23.264    23:40:53	-- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32
00:07:23.264    23:40:53	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32
00:07:23.522    23:40:54	-- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32
00:07:23.523    23:40:54	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32
00:07:23.781    23:40:54	-- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:07:23.781    23:40:54	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:07:24.039    23:40:54	-- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0
00:07:24.039    23:40:54	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0
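The four bdev_lvol RPCs just traced build the lvol chain on top of the lvs_test store created on Nvme0n1p0. As a reproducible sequence (commands copied from the trace; -t marks lvol1 thin-provisioned):

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0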
00:07:24.297   23:40:54	-- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]]
00:07:24.297   23:40:54	-- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]]
00:07:24.297   23:40:54	-- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:0844ce39-daeb-47a5-8bce-e24a6281ea23 bdev_register:4b62a0b2-2265-40ba-b629-6c4bdcd48205 bdev_register:e2f06141-c831-4076-9b3b-3dd393d045ae bdev_register:8fb8880e-0934-405b-b702-506ebc2378da
00:07:24.297   23:40:54	-- json_config/json_config.sh@70 -- # local events_to_check
00:07:24.297   23:40:54	-- json_config/json_config.sh@71 -- # local recorded_events
00:07:24.297   23:40:54	-- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort))
00:07:24.297    23:40:54	-- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:0844ce39-daeb-47a5-8bce-e24a6281ea23 bdev_register:4b62a0b2-2265-40ba-b629-6c4bdcd48205 bdev_register:e2f06141-c831-4076-9b3b-3dd393d045ae bdev_register:8fb8880e-0934-405b-b702-506ebc2378da
00:07:24.297    23:40:54	-- json_config/json_config.sh@74 -- # sort
00:07:24.297   23:40:54	-- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort))
00:07:24.297    23:40:54	-- json_config/json_config.sh@75 -- # get_notifications
00:07:24.297    23:40:54	-- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id
00:07:24.297    23:40:54	-- json_config/json_config.sh@75 -- # sort
00:07:24.297    23:40:54	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.297    23:40:54	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.297     23:40:54	-- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0
00:07:24.297     23:40:54	-- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:07:24.297     23:40:54	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:07:24.556    23:40:55	-- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.556    23:40:55	-- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.556    23:40:55	-- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p0
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.556    23:40:55	-- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.556    23:40:55	-- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.556    23:40:55	-- json_config/json_config.sh@65 -- # echo bdev_register:Null0
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.556    23:40:55	-- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.556    23:40:55	-- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.556    23:40:55	-- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.556    23:40:55	-- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.556    23:40:55	-- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.556    23:40:55	-- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.556    23:40:55	-- json_config/json_config.sh@65 -- # echo bdev_register:0844ce39-daeb-47a5-8bce-e24a6281ea23
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.556    23:40:55	-- json_config/json_config.sh@65 -- # echo bdev_register:4b62a0b2-2265-40ba-b629-6c4bdcd48205
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.556    23:40:55	-- json_config/json_config.sh@65 -- # echo bdev_register:e2f06141-c831-4076-9b3b-3dd393d045ae
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.556    23:40:55	-- json_config/json_config.sh@65 -- # echo bdev_register:8fb8880e-0934-405b-b702-506ebc2378da
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # IFS=:
00:07:24.556    23:40:55	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:24.556   23:40:55	-- json_config/json_config.sh@77 -- # [[ bdev_register:0844ce39-daeb-47a5-8bce-e24a6281ea23 bdev_register:4b62a0b2-2265-40ba-b629-6c4bdcd48205 bdev_register:8fb8880e-0934-405b-b702-506ebc2378da bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:e2f06141-c831-4076-9b3b-3dd393d045ae != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\8\4\4\c\e\3\9\-\d\a\e\b\-\4\7\a\5\-\8\b\c\e\-\e\2\4\a\6\2\8\1\e\a\2\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\4\b\6\2\a\0\b\2\-\2\2\6\5\-\4\0\b\a\-\b\6\2\9\-\6\c\4\b\d\c\d\4\8\2\0\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\f\b\8\8\8\0\e\-\0\9\3\4\-\4\0\5\b\-\b\7\0\2\-\5\0\6\e\b\c\2\3\7\8\d\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\2\f\0\6\1\4\1\-\c\8\3\1\-\4\0\7\6\-\9\b\3\b\-\3\d\d\3\9\3\d\0\4\5\a\e ]]
00:07:24.556   23:40:55	-- json_config/json_config.sh@89 -- # cat
00:07:24.556    23:40:55	-- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:0844ce39-daeb-47a5-8bce-e24a6281ea23 bdev_register:4b62a0b2-2265-40ba-b629-6c4bdcd48205 bdev_register:8fb8880e-0934-405b-b702-506ebc2378da bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:e2f06141-c831-4076-9b3b-3dd393d045ae
00:07:24.556  Expected events matched:
00:07:24.556   bdev_register:0844ce39-daeb-47a5-8bce-e24a6281ea23
00:07:24.556   bdev_register:4b62a0b2-2265-40ba-b629-6c4bdcd48205
00:07:24.556   bdev_register:8fb8880e-0934-405b-b702-506ebc2378da
00:07:24.556   bdev_register:Malloc0
00:07:24.556   bdev_register:Malloc0p0
00:07:24.556   bdev_register:Malloc0p1
00:07:24.556   bdev_register:Malloc0p2
00:07:24.556   bdev_register:Malloc1
00:07:24.556   bdev_register:Malloc3
00:07:24.556   bdev_register:Null0
00:07:24.556   bdev_register:Nvme0n1
00:07:24.556   bdev_register:Nvme0n1p0
00:07:24.556   bdev_register:Nvme0n1p1
00:07:24.556   bdev_register:PTBdevFromMalloc3
00:07:24.556   bdev_register:aio_disk
00:07:24.556   bdev_register:e2f06141-c831-4076-9b3b-3dd393d045ae
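The long IFS=: read loop traced above is json_config.sh's get_notifications helper: it pulls every recorded notification from the target and reports it as a type:ctx pair. A condensed stand-alone version (same RPC and jq filter as the trace; the function wrapper is illustrative):

    get_notifications() {
        local ev_type ev_ctx event_id
        while IFS=: read -r ev_type ev_ctx event_id; do
            echo "$ev_type:$ev_ctx"   # the id field is read but not reported
        done < <(./scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 \
                     | jq -r '.[] | "\(.type):\(.ctx):\(.id)"')
    }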
00:07:24.556   23:40:55	-- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config
00:07:24.556   23:40:55	-- common/autotest_common.sh@728 -- # xtrace_disable
00:07:24.556   23:40:55	-- common/autotest_common.sh@10 -- # set +x
00:07:24.556   23:40:55	-- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]]
00:07:24.556   23:40:55	-- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]]
00:07:24.556   23:40:55	-- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]]
00:07:24.556   23:40:55	-- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target
00:07:24.556   23:40:55	-- common/autotest_common.sh@728 -- # xtrace_disable
00:07:24.556   23:40:55	-- common/autotest_common.sh@10 -- # set +x
00:07:24.556   23:40:55	-- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]]
00:07:24.556   23:40:55	-- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:07:24.556   23:40:55	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:07:24.815  MallocBdevForConfigChangeCheck
00:07:24.815   23:40:55	-- json_config/json_config.sh@355 -- # timing_exit json_config_test_init
00:07:24.815   23:40:55	-- common/autotest_common.sh@728 -- # xtrace_disable
00:07:24.815   23:40:55	-- common/autotest_common.sh@10 -- # set +x
00:07:24.815   23:40:55	-- json_config/json_config.sh@422 -- # tgt_rpc save_config
00:07:24.815   23:40:55	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:25.382  INFO: shutting down applications...
00:07:25.382   23:40:55	-- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...'
00:07:25.382   23:40:55	-- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]]
00:07:25.382   23:40:55	-- json_config/json_config.sh@431 -- # json_config_clear target
00:07:25.382   23:40:55	-- json_config/json_config.sh@385 -- # [[ -n 22 ]]
00:07:25.382   23:40:55	-- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:07:25.382  [2024-12-13 23:40:56.009265] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test
00:07:25.640  Calling clear_vhost_scsi_subsystem
00:07:25.640  Calling clear_iscsi_subsystem
00:07:25.640  Calling clear_vhost_blk_subsystem
00:07:25.640  Calling clear_nbd_subsystem
00:07:25.640  Calling clear_nvmf_subsystem
00:07:25.640  Calling clear_bdev_subsystem
00:07:25.640  Calling clear_accel_subsystem
00:07:25.640  Calling clear_iobuf_subsystem
00:07:25.640  Calling clear_sock_subsystem
00:07:25.640  Calling clear_vmd_subsystem
00:07:25.640  Calling clear_scheduler_subsystem
00:07:25.640   23:40:56	-- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
00:07:25.640   23:40:56	-- json_config/json_config.sh@396 -- # count=100
00:07:25.640   23:40:56	-- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']'
00:07:25.640   23:40:56	-- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:25.640   23:40:56	-- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:07:25.640   23:40:56	-- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty
00:07:25.898   23:40:56	-- json_config/json_config.sh@398 -- # break
00:07:25.898   23:40:56	-- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']'
00:07:25.898   23:40:56	-- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target
00:07:25.899   23:40:56	-- json_config/json_config.sh@120 -- # local app=target
00:07:25.899   23:40:56	-- json_config/json_config.sh@123 -- # [[ -n 22 ]]
00:07:25.899   23:40:56	-- json_config/json_config.sh@124 -- # [[ -n 103142 ]]
00:07:25.899   23:40:56	-- json_config/json_config.sh@127 -- # kill -SIGINT 103142
00:07:25.899   23:40:56	-- json_config/json_config.sh@129 -- # (( i = 0 ))
00:07:25.899   23:40:56	-- json_config/json_config.sh@129 -- # (( i < 30 ))
00:07:25.899   23:40:56	-- json_config/json_config.sh@130 -- # kill -0 103142
00:07:25.899   23:40:56	-- json_config/json_config.sh@134 -- # sleep 0.5
00:07:26.465   23:40:57	-- json_config/json_config.sh@129 -- # (( i++ ))
00:07:26.465   23:40:57	-- json_config/json_config.sh@129 -- # (( i < 30 ))
00:07:26.465   23:40:57	-- json_config/json_config.sh@130 -- # kill -0 103142
00:07:26.465   23:40:57	-- json_config/json_config.sh@134 -- # sleep 0.5
00:07:27.033   23:40:57	-- json_config/json_config.sh@129 -- # (( i++ ))
00:07:27.033   23:40:57	-- json_config/json_config.sh@129 -- # (( i < 30 ))
00:07:27.033   23:40:57	-- json_config/json_config.sh@130 -- # kill -0 103142
00:07:27.033   23:40:57	-- json_config/json_config.sh@131 -- # app_pid[$app]=
00:07:27.033   23:40:57	-- json_config/json_config.sh@132 -- # break
00:07:27.033   23:40:57	-- json_config/json_config.sh@137 -- # [[ -n '' ]]
00:07:27.033  SPDK target shutdown done
00:07:27.033   23:40:57	-- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done'
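Shutdown above follows a SIGINT-then-poll pattern: send SIGINT, then probe with kill -0 in half-second steps until the process is gone. As a sketch (pid handling assumed; the 15 s budget matches the traced i < 30 / sleep 0.5 loop):

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # process gone, shutdown done
        sleep 0.5
    done
    kill -0 "$pid" 2>/dev/null && echo 'target still alive after 15 s' >&2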
00:07:27.033  INFO: relaunching applications...
00:07:27.033   23:40:57	-- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...'
00:07:27.033   23:40:57	-- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:07:27.033   23:40:57	-- json_config/json_config.sh@98 -- # local app=target
00:07:27.033   23:40:57	-- json_config/json_config.sh@99 -- # shift
00:07:27.033   23:40:57	-- json_config/json_config.sh@101 -- # [[ -n 22 ]]
00:07:27.033   23:40:57	-- json_config/json_config.sh@102 -- # [[ -z '' ]]
00:07:27.033   23:40:57	-- json_config/json_config.sh@104 -- # local app_extra_params=
00:07:27.033   23:40:57	-- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:07:27.033   23:40:57	-- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:07:27.033   23:40:57	-- json_config/json_config.sh@111 -- # app_pid[$app]=103399
00:07:27.033   23:40:57	-- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...'
00:07:27.033   23:40:57	-- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:07:27.033  Waiting for target to run...
00:07:27.033   23:40:57	-- json_config/json_config.sh@114 -- # waitforlisten 103399 /var/tmp/spdk_tgt.sock
00:07:27.033   23:40:57	-- common/autotest_common.sh@829 -- # '[' -z 103399 ']'
00:07:27.033   23:40:57	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:27.033   23:40:57	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:27.033  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:27.033   23:40:57	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:27.033   23:40:57	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:27.033   23:40:57	-- common/autotest_common.sh@10 -- # set +x
00:07:27.033  [2024-12-13 23:40:57.675859] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:27.033  [2024-12-13 23:40:57.676075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103399 ]
00:07:27.600  [2024-12-13 23:40:58.245397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:27.859  [2024-12-13 23:40:58.447628] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:27.859  [2024-12-13 23:40:58.447971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:28.426  [2024-12-13 23:40:59.038088] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:07:28.426  [2024-12-13 23:40:59.038225] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:07:28.426  [2024-12-13 23:40:59.046063] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:07:28.426  [2024-12-13 23:40:59.046133] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:07:28.426  [2024-12-13 23:40:59.054088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:07:28.426  [2024-12-13 23:40:59.054173] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:07:28.426  [2024-12-13 23:40:59.054207] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:07:28.426  [2024-12-13 23:40:59.145305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:07:28.426  [2024-12-13 23:40:59.145413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:28.426  [2024-12-13 23:40:59.145454] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:07:28.426  [2024-12-13 23:40:59.145483] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:28.426  [2024-12-13 23:40:59.146089] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:28.426  [2024-12-13 23:40:59.146170] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:07:28.684   23:40:59	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:28.684   23:40:59	-- common/autotest_common.sh@862 -- # return 0
00:07:28.684  
00:07:28.684   23:40:59	-- json_config/json_config.sh@115 -- # echo ''
00:07:28.684   23:40:59	-- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]]
00:07:28.684  INFO: Checking if target configuration is the same...
00:07:28.684   23:40:59	-- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...'
00:07:28.684   23:40:59	-- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:07:28.684    23:40:59	-- json_config/json_config.sh@441 -- # tgt_rpc save_config
00:07:28.684    23:40:59	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:28.684  + '[' 2 -ne 2 ']'
00:07:28.684  +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:07:28.684  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:07:28.684  + rootdir=/home/vagrant/spdk_repo/spdk
00:07:28.684  +++ basename /dev/fd/62
00:07:28.684  ++ mktemp /tmp/62.XXX
00:07:28.684  + tmp_file_1=/tmp/62.Z8G
00:07:28.684  +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:07:28.684  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:28.684  + tmp_file_2=/tmp/spdk_tgt_config.json.REG
00:07:28.684  + ret=0
00:07:28.684  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:07:28.943  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:07:29.202  + diff -u /tmp/62.Z8G /tmp/spdk_tgt_config.json.REG
00:07:29.202  INFO: JSON config files are the same
00:07:29.202  + echo 'INFO: JSON config files are the same'
00:07:29.202  + rm /tmp/62.Z8G /tmp/spdk_tgt_config.json.REG
00:07:29.202  + exit 0
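json_diff.sh, whose +-traced commands appear above, normalizes both configurations with config_filter.py -method sort before diffing, so JSON key order cannot cause false mismatches. A hand-rolled equivalent (temp-file names are illustrative):

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'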
00:07:29.202   23:40:59	-- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]]
00:07:29.202  INFO: changing configuration and checking if this can be detected...
00:07:29.202   23:40:59	-- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:07:29.202   23:40:59	-- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:07:29.202   23:40:59	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:07:29.460   23:40:59	-- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:07:29.460    23:40:59	-- json_config/json_config.sh@450 -- # tgt_rpc save_config
00:07:29.460    23:40:59	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:29.460  + '[' 2 -ne 2 ']'
00:07:29.460  +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:07:29.460  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:07:29.460  + rootdir=/home/vagrant/spdk_repo/spdk
00:07:29.460  +++ basename /dev/fd/62
00:07:29.460  ++ mktemp /tmp/62.XXX
00:07:29.460  + tmp_file_1=/tmp/62.LPj
00:07:29.460  +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:07:29.460  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:29.460  + tmp_file_2=/tmp/spdk_tgt_config.json.ypw
00:07:29.460  + ret=0
00:07:29.461  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:07:29.718  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:07:29.718  + diff -u /tmp/62.LPj /tmp/spdk_tgt_config.json.ypw
00:07:29.718  + ret=1
00:07:29.718  + echo '=== Start of file: /tmp/62.LPj ==='
00:07:29.718  + cat /tmp/62.LPj
00:07:29.718  + echo '=== End of file: /tmp/62.LPj ==='
00:07:29.718  + echo ''
00:07:29.718  + echo '=== Start of file: /tmp/spdk_tgt_config.json.ypw ==='
00:07:29.718  + cat /tmp/spdk_tgt_config.json.ypw
00:07:29.718  + echo '=== End of file: /tmp/spdk_tgt_config.json.ypw ==='
00:07:29.718  + echo ''
00:07:29.718  + rm /tmp/62.LPj /tmp/spdk_tgt_config.json.ypw
00:07:29.718  + exit 1
00:07:29.718  INFO: configuration change detected.
00:07:29.718   23:41:00	-- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.'
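Deleting MallocBdevForConfigChangeCheck is what flipped the diff above from exit 0 to exit 1; that bdev exists purely as a canary for change detection. By hand (reusing the illustrative /tmp/saved.json from the previous sketch):

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort \
        | diff -u /tmp/saved.json - || echo 'INFO: configuration change detected.'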
00:07:29.718   23:41:00	-- json_config/json_config.sh@457 -- # json_config_test_fini
00:07:29.718   23:41:00	-- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini
00:07:29.718   23:41:00	-- common/autotest_common.sh@722 -- # xtrace_disable
00:07:29.718   23:41:00	-- common/autotest_common.sh@10 -- # set +x
00:07:29.718   23:41:00	-- json_config/json_config.sh@360 -- # local ret=0
00:07:29.718   23:41:00	-- json_config/json_config.sh@362 -- # [[ -n '' ]]
00:07:29.718   23:41:00	-- json_config/json_config.sh@370 -- # [[ -n 103399 ]]
00:07:29.718   23:41:00	-- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config
00:07:29.718   23:41:00	-- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config
00:07:29.718   23:41:00	-- common/autotest_common.sh@722 -- # xtrace_disable
00:07:29.718   23:41:00	-- common/autotest_common.sh@10 -- # set +x
00:07:29.718   23:41:00	-- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]]
00:07:29.718   23:41:00	-- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0
00:07:29.718   23:41:00	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0
00:07:29.976   23:41:00	-- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0
00:07:29.976   23:41:00	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0
00:07:30.235   23:41:00	-- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0
00:07:30.235   23:41:00	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0
00:07:30.235   23:41:00	-- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test
00:07:30.235   23:41:00	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test
00:07:30.494    23:41:01	-- json_config/json_config.sh@246 -- # uname -s
00:07:30.494   23:41:01	-- json_config/json_config.sh@246 -- # [[ Linux = Linux ]]
00:07:30.494   23:41:01	-- json_config/json_config.sh@247 -- # rm -f /sample_aio
00:07:30.494   23:41:01	-- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]]
00:07:30.494   23:41:01	-- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config
00:07:30.494   23:41:01	-- common/autotest_common.sh@728 -- # xtrace_disable
00:07:30.494   23:41:01	-- common/autotest_common.sh@10 -- # set +x
00:07:30.494   23:41:01	-- json_config/json_config.sh@376 -- # killprocess 103399
00:07:30.494   23:41:01	-- common/autotest_common.sh@936 -- # '[' -z 103399 ']'
00:07:30.494   23:41:01	-- common/autotest_common.sh@940 -- # kill -0 103399
00:07:30.494    23:41:01	-- common/autotest_common.sh@941 -- # uname
00:07:30.494   23:41:01	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:30.494    23:41:01	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103399
00:07:30.494   23:41:01	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:30.494   23:41:01	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:30.494  killing process with pid 103399
00:07:30.494   23:41:01	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 103399'
00:07:30.494   23:41:01	-- common/autotest_common.sh@955 -- # kill 103399
00:07:30.494   23:41:01	-- common/autotest_common.sh@960 -- # wait 103399
00:07:31.429   23:41:02	-- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:07:31.429   23:41:02	-- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini
00:07:31.429   23:41:02	-- common/autotest_common.sh@728 -- # xtrace_disable
00:07:31.429   23:41:02	-- common/autotest_common.sh@10 -- # set +x
00:07:31.688   23:41:02	-- json_config/json_config.sh@381 -- # return 0
00:07:31.688  INFO: Success
00:07:31.688   23:41:02	-- json_config/json_config.sh@459 -- # echo 'INFO: Success'
00:07:31.688  
00:07:31.688  real	0m13.065s
00:07:31.688  user	0m18.720s
00:07:31.688  sys	0m2.370s
00:07:31.688   23:41:02	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:31.688   23:41:02	-- common/autotest_common.sh@10 -- # set +x
00:07:31.688  ************************************
00:07:31.688  END TEST json_config
00:07:31.688  ************************************
00:07:31.689   23:41:02	-- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:07:31.689   23:41:02	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:31.689   23:41:02	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:31.689   23:41:02	-- common/autotest_common.sh@10 -- # set +x
00:07:31.689  ************************************
00:07:31.689  START TEST json_config_extra_key
00:07:31.689  ************************************
00:07:31.689   23:41:02	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:07:31.689    23:41:02	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:31.689     23:41:02	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:31.689     23:41:02	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:31.689    23:41:02	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:31.689    23:41:02	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:31.689    23:41:02	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:31.689    23:41:02	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:31.689    23:41:02	-- scripts/common.sh@335 -- # IFS=.-:
00:07:31.689    23:41:02	-- scripts/common.sh@335 -- # read -ra ver1
00:07:31.689    23:41:02	-- scripts/common.sh@336 -- # IFS=.-:
00:07:31.689    23:41:02	-- scripts/common.sh@336 -- # read -ra ver2
00:07:31.689    23:41:02	-- scripts/common.sh@337 -- # local 'op=<'
00:07:31.689    23:41:02	-- scripts/common.sh@339 -- # ver1_l=2
00:07:31.689    23:41:02	-- scripts/common.sh@340 -- # ver2_l=1
00:07:31.689    23:41:02	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:31.689    23:41:02	-- scripts/common.sh@343 -- # case "$op" in
00:07:31.689    23:41:02	-- scripts/common.sh@344 -- # : 1
00:07:31.689    23:41:02	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:31.689    23:41:02	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:31.689     23:41:02	-- scripts/common.sh@364 -- # decimal 1
00:07:31.689     23:41:02	-- scripts/common.sh@352 -- # local d=1
00:07:31.689     23:41:02	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:31.689     23:41:02	-- scripts/common.sh@354 -- # echo 1
00:07:31.689    23:41:02	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:31.689     23:41:02	-- scripts/common.sh@365 -- # decimal 2
00:07:31.689     23:41:02	-- scripts/common.sh@352 -- # local d=2
00:07:31.689     23:41:02	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:31.689     23:41:02	-- scripts/common.sh@354 -- # echo 2
00:07:31.689    23:41:02	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:31.689    23:41:02	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:31.689    23:41:02	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:31.689    23:41:02	-- scripts/common.sh@367 -- # return 0
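The scripts/common.sh trace above is a component-wise version comparison: split both versions on '.', '-' and ':', then compare field by field ('lt 1.15 2' asks whether lcov 1.15 predates 2, so the old LCOV_OPTS are selected). A condensed sketch of the same idea (simplified; purely numeric fields assumed, unlike the more careful original):

    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # all fields equal, so not less-than
    }
    version_lt 1.15 2 && echo 'lcov 1.15 is older than 2'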
00:07:31.689    23:41:02	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:31.689    23:41:02	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:31.689  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:31.689  		--rc genhtml_branch_coverage=1
00:07:31.689  		--rc genhtml_function_coverage=1
00:07:31.689  		--rc genhtml_legend=1
00:07:31.689  		--rc geninfo_all_blocks=1
00:07:31.689  		--rc geninfo_unexecuted_blocks=1
00:07:31.689  		
00:07:31.689  		'
00:07:31.689    23:41:02	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:31.689  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:31.689  		--rc genhtml_branch_coverage=1
00:07:31.689  		--rc genhtml_function_coverage=1
00:07:31.689  		--rc genhtml_legend=1
00:07:31.689  		--rc geninfo_all_blocks=1
00:07:31.689  		--rc geninfo_unexecuted_blocks=1
00:07:31.689  		
00:07:31.689  		'
00:07:31.689    23:41:02	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:31.689  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:31.689  		--rc genhtml_branch_coverage=1
00:07:31.689  		--rc genhtml_function_coverage=1
00:07:31.689  		--rc genhtml_legend=1
00:07:31.689  		--rc geninfo_all_blocks=1
00:07:31.689  		--rc geninfo_unexecuted_blocks=1
00:07:31.689  		
00:07:31.689  		'
00:07:31.689    23:41:02	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:31.689  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:31.689  		--rc genhtml_branch_coverage=1
00:07:31.689  		--rc genhtml_function_coverage=1
00:07:31.689  		--rc genhtml_legend=1
00:07:31.689  		--rc geninfo_all_blocks=1
00:07:31.689  		--rc geninfo_unexecuted_blocks=1
00:07:31.689  		
00:07:31.689  		'
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:07:31.689     23:41:02	-- nvmf/common.sh@7 -- # uname -s
00:07:31.689    23:41:02	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:31.689    23:41:02	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:31.689    23:41:02	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:31.689    23:41:02	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:31.689    23:41:02	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:31.689    23:41:02	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:31.689    23:41:02	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:31.689    23:41:02	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:31.689    23:41:02	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:31.689     23:41:02	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:31.689    23:41:02	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:73fad446-ad3f-4c13-86c5-7368cde40862
00:07:31.689    23:41:02	-- nvmf/common.sh@18 -- # NVME_HOSTID=73fad446-ad3f-4c13-86c5-7368cde40862
00:07:31.689    23:41:02	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:31.689    23:41:02	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:31.689    23:41:02	-- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:31.689    23:41:02	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:31.689     23:41:02	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:31.689     23:41:02	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:31.689     23:41:02	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:31.689      23:41:02	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:07:31.689      23:41:02	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:07:31.689      23:41:02	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:07:31.689      23:41:02	-- paths/export.sh@5 -- # export PATH
00:07:31.689      23:41:02	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:07:31.689    23:41:02	-- nvmf/common.sh@46 -- # : 0
00:07:31.689    23:41:02	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:07:31.689    23:41:02	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:07:31.689    23:41:02	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:07:31.689    23:41:02	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:31.689    23:41:02	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:31.689    23:41:02	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:07:31.689    23:41:02	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:07:31.689    23:41:02	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='')
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024')
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@18 -- # declare -A app_params
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path
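The declare -A lines traced above keep per-app state (pid, socket, params, config path) in bash associative arrays keyed by app name ('target' here), which lets the same start/shutdown helpers address any app by key. In isolation:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    echo "target: socket=${app_socket[target]} params=${app_params[target]}"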
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:07:31.689  INFO: launching applications...
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...'
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@24 -- # local app=target
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@25 -- # shift
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]]
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]]
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=103582
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...'
00:07:31.689  Waiting for target to run...
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:07:31.689   23:41:02	-- json_config/json_config_extra_key.sh@34 -- # waitforlisten 103582 /var/tmp/spdk_tgt.sock
00:07:31.689   23:41:02	-- common/autotest_common.sh@829 -- # '[' -z 103582 ']'
00:07:31.689   23:41:02	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:31.689   23:41:02	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:31.689  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:31.689   23:41:02	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:31.689   23:41:02	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:31.689   23:41:02	-- common/autotest_common.sh@10 -- # set +x
00:07:31.949  [2024-12-13 23:41:02.484914] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:31.949  [2024-12-13 23:41:02.485157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103582 ]
00:07:32.516  [2024-12-13 23:41:03.032605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:32.516  [2024-12-13 23:41:03.208572] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:32.516  [2024-12-13 23:41:03.208863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:33.451   23:41:03	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:33.451  
00:07:33.451   23:41:03	-- common/autotest_common.sh@862 -- # return 0
00:07:33.451   23:41:03	-- json_config/json_config_extra_key.sh@35 -- # echo ''
00:07:33.451  INFO: shutting down applications...
00:07:33.451   23:41:03	-- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...'
00:07:33.451   23:41:03	-- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target
00:07:33.451   23:41:03	-- json_config/json_config_extra_key.sh@40 -- # local app=target
00:07:33.451   23:41:03	-- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]]
00:07:33.451   23:41:03	-- json_config/json_config_extra_key.sh@44 -- # [[ -n 103582 ]]
00:07:33.451   23:41:03	-- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 103582
00:07:33.451   23:41:03	-- json_config/json_config_extra_key.sh@49 -- # (( i = 0 ))
00:07:33.451   23:41:03	-- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:07:33.451   23:41:03	-- json_config/json_config_extra_key.sh@50 -- # kill -0 103582
00:07:33.451   23:41:03	-- json_config/json_config_extra_key.sh@54 -- # sleep 0.5
00:07:34.018   23:41:04	-- json_config/json_config_extra_key.sh@49 -- # (( i++ ))
00:07:34.018   23:41:04	-- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:07:34.018   23:41:04	-- json_config/json_config_extra_key.sh@50 -- # kill -0 103582
00:07:34.018   23:41:04	-- json_config/json_config_extra_key.sh@54 -- # sleep 0.5
00:07:34.277   23:41:05	-- json_config/json_config_extra_key.sh@49 -- # (( i++ ))
00:07:34.277   23:41:05	-- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:07:34.277   23:41:05	-- json_config/json_config_extra_key.sh@50 -- # kill -0 103582
00:07:34.277   23:41:05	-- json_config/json_config_extra_key.sh@54 -- # sleep 0.5
00:07:34.848   23:41:05	-- json_config/json_config_extra_key.sh@49 -- # (( i++ ))
00:07:34.848   23:41:05	-- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:07:34.848   23:41:05	-- json_config/json_config_extra_key.sh@50 -- # kill -0 103582
00:07:34.848   23:41:05	-- json_config/json_config_extra_key.sh@54 -- # sleep 0.5
00:07:35.415   23:41:06	-- json_config/json_config_extra_key.sh@49 -- # (( i++ ))
00:07:35.415   23:41:06	-- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:07:35.415   23:41:06	-- json_config/json_config_extra_key.sh@50 -- # kill -0 103582
00:07:35.415   23:41:06	-- json_config/json_config_extra_key.sh@54 -- # sleep 0.5
00:07:35.983   23:41:06	-- json_config/json_config_extra_key.sh@49 -- # (( i++ ))
00:07:35.983   23:41:06	-- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:07:35.983   23:41:06	-- json_config/json_config_extra_key.sh@50 -- # kill -0 103582
00:07:35.983   23:41:06	-- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]=
00:07:35.983   23:41:06	-- json_config/json_config_extra_key.sh@52 -- # break
00:07:35.983   23:41:06	-- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]]
00:07:35.983  SPDK target shutdown done
00:07:35.983   23:41:06	-- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done'
00:07:35.983  Success
00:07:35.983   23:41:06	-- json_config/json_config_extra_key.sh@82 -- # echo Success
00:07:35.983  
00:07:35.983  real	0m4.284s
00:07:35.983  user	0m3.810s
00:07:35.983  sys	0m0.771s
00:07:35.983   23:41:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:35.983   23:41:06	-- common/autotest_common.sh@10 -- # set +x
00:07:35.983  ************************************
00:07:35.983  END TEST json_config_extra_key
00:07:35.983  ************************************
00:07:35.983   23:41:06	-- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:35.983   23:41:06	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:35.983   23:41:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:35.983   23:41:06	-- common/autotest_common.sh@10 -- # set +x
00:07:35.983  ************************************
00:07:35.983  START TEST alias_rpc
00:07:35.983  ************************************
00:07:35.983   23:41:06	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:35.983  * Looking for test storage...
00:07:35.983  * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc
00:07:35.983    23:41:06	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:35.983     23:41:06	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:35.983     23:41:06	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:36.242    23:41:06	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:36.242    23:41:06	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:36.242    23:41:06	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:36.242    23:41:06	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:36.242    23:41:06	-- scripts/common.sh@335 -- # IFS=.-:
00:07:36.242    23:41:06	-- scripts/common.sh@335 -- # read -ra ver1
00:07:36.242    23:41:06	-- scripts/common.sh@336 -- # IFS=.-:
00:07:36.242    23:41:06	-- scripts/common.sh@336 -- # read -ra ver2
00:07:36.242    23:41:06	-- scripts/common.sh@337 -- # local 'op=<'
00:07:36.242    23:41:06	-- scripts/common.sh@339 -- # ver1_l=2
00:07:36.242    23:41:06	-- scripts/common.sh@340 -- # ver2_l=1
00:07:36.242    23:41:06	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:36.242    23:41:06	-- scripts/common.sh@343 -- # case "$op" in
00:07:36.242    23:41:06	-- scripts/common.sh@344 -- # : 1
00:07:36.242    23:41:06	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:36.242    23:41:06	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:36.242     23:41:06	-- scripts/common.sh@364 -- # decimal 1
00:07:36.242     23:41:06	-- scripts/common.sh@352 -- # local d=1
00:07:36.242     23:41:06	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:36.242     23:41:06	-- scripts/common.sh@354 -- # echo 1
00:07:36.242    23:41:06	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:36.242     23:41:06	-- scripts/common.sh@365 -- # decimal 2
00:07:36.242     23:41:06	-- scripts/common.sh@352 -- # local d=2
00:07:36.242     23:41:06	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:36.242     23:41:06	-- scripts/common.sh@354 -- # echo 2
00:07:36.242    23:41:06	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:36.242    23:41:06	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:36.242    23:41:06	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:36.242    23:41:06	-- scripts/common.sh@367 -- # return 0
00:07:36.242    23:41:06	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:36.242    23:41:06	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:36.242  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:36.242  		--rc genhtml_branch_coverage=1
00:07:36.242  		--rc genhtml_function_coverage=1
00:07:36.242  		--rc genhtml_legend=1
00:07:36.242  		--rc geninfo_all_blocks=1
00:07:36.242  		--rc geninfo_unexecuted_blocks=1
00:07:36.242  		
00:07:36.242  		'
00:07:36.242    23:41:06	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:36.242  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:36.242  		--rc genhtml_branch_coverage=1
00:07:36.242  		--rc genhtml_function_coverage=1
00:07:36.242  		--rc genhtml_legend=1
00:07:36.242  		--rc geninfo_all_blocks=1
00:07:36.242  		--rc geninfo_unexecuted_blocks=1
00:07:36.242  		
00:07:36.242  		'
00:07:36.242    23:41:06	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:36.242  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:36.242  		--rc genhtml_branch_coverage=1
00:07:36.242  		--rc genhtml_function_coverage=1
00:07:36.242  		--rc genhtml_legend=1
00:07:36.242  		--rc geninfo_all_blocks=1
00:07:36.242  		--rc geninfo_unexecuted_blocks=1
00:07:36.242  		
00:07:36.242  		'
00:07:36.242    23:41:06	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:36.242  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:36.242  		--rc genhtml_branch_coverage=1
00:07:36.242  		--rc genhtml_function_coverage=1
00:07:36.242  		--rc genhtml_legend=1
00:07:36.242  		--rc geninfo_all_blocks=1
00:07:36.242  		--rc geninfo_unexecuted_blocks=1
00:07:36.242  		
00:07:36.242  		'
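[editor's note] The indented trace ending here is scripts/common.sh's cmp_versions deciding that the installed lcov (1.15) is older than 2, so autotest_common.sh keeps the pre-2.0 '--rc lcov_*' option spellings in LCOV_OPTS and LCOV. A simplified standalone sketch of that field-by-field comparison, splitting on the same IFS=.-:, is below; version_lt is an illustrative name, and the real cmp_versions also handles '>', '>=', '<=', '==' and non-numeric fields.

    version_lt() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"                 # @335/@336: split on . - :
        read -ra v2 <<< "$2"
        local i a b
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}      # pad the shorter version with 0
            ((a < b)) && return 0
            ((a > b)) && return 1
        done
        return 1                             # equal versions are not 'lt'
    }
    version_lt 1.15 2 && echo 'old lcov: keep --rc lcov_branch_coverage=1 spelling'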
00:07:36.242   23:41:06	-- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:36.242   23:41:06	-- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=103706
00:07:36.242   23:41:06	-- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 103706
00:07:36.242   23:41:06	-- common/autotest_common.sh@829 -- # '[' -z 103706 ']'
00:07:36.242   23:41:06	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:36.242   23:41:06	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:36.242   23:41:06	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:36.242  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:36.242   23:41:06	-- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:07:36.242   23:41:06	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:36.242   23:41:06	-- common/autotest_common.sh@10 -- # set +x
00:07:36.242  [2024-12-13 23:41:06.828309] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:36.243  [2024-12-13 23:41:06.828544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103706 ]
00:07:36.502  [2024-12-13 23:41:06.996808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.502  [2024-12-13 23:41:07.217501] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:36.502  [2024-12-13 23:41:07.217776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:37.879   23:41:08	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:37.879   23:41:08	-- common/autotest_common.sh@862 -- # return 0
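[editor's note] waitforlisten (autotest_common.sh@829-@862 above) launched spdk_tgt in the background and blocked until the RPC server answered on /var/tmp/spdk.sock; the '(( i == 0 ))' / 'return 0' pair is its success path. A rough equivalent under the same defaults is sketched below; wait_for_rpc is a hypothetical stand-in, and the real helper does extra PID and argument validation.

    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died during init
            if scripts/rpc.py -s "$rpc_addr" -t 1 spdk_get_version &>/dev/null; then
                return 0                              # socket is up and answering
            fi
            sleep 0.1
        done
        return 1
    }
    # usage: build/bin/spdk_tgt & wait_for_rpc $!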
00:07:37.879   23:41:08	-- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
00:07:38.138   23:41:08	-- alias_rpc/alias_rpc.sh@19 -- # killprocess 103706
00:07:38.138   23:41:08	-- common/autotest_common.sh@936 -- # '[' -z 103706 ']'
00:07:38.138   23:41:08	-- common/autotest_common.sh@940 -- # kill -0 103706
00:07:38.138    23:41:08	-- common/autotest_common.sh@941 -- # uname
00:07:38.138   23:41:08	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:38.138    23:41:08	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103706
00:07:38.138   23:41:08	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:38.138   23:41:08	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:38.138  killing process with pid 103706
00:07:38.138   23:41:08	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 103706'
00:07:38.138   23:41:08	-- common/autotest_common.sh@955 -- # kill 103706
00:07:38.138   23:41:08	-- common/autotest_common.sh@960 -- # wait 103706
00:07:40.698  
00:07:40.698  real	0m4.247s
00:07:40.698  user	0m4.448s
00:07:40.698  sys	0m0.705s
00:07:40.698   23:41:10	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:40.698   23:41:10	-- common/autotest_common.sh@10 -- # set +x
00:07:40.698  ************************************
00:07:40.698  END TEST alias_rpc
00:07:40.698  ************************************
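[editor's note] killprocess (autotest_common.sh@936-@960, traced repeatedly in this section) is the stock teardown: confirm the PID exists, on Linux read its comm name and refuse to signal a sudo wrapper, then SIGTERM and reap. Condensed to exactly the structure the trace shows; killprocess_sketch is an illustrative rename.

    killprocess_sketch() {
        local pid=$1
        [[ -n $pid ]] || return 1                     # @936: require a PID
        kill -0 "$pid" 2>/dev/null || return 0        # @940: already gone
        if [[ $(uname) == Linux ]]; then              # @941
            # @942/@946: never signal the sudo wrapper by mistake
            [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"          # @954
        kill "$pid"                                   # @955
        # @960: reap the child; valid because the target was launched
        # from this same shell
        wait "$pid" 2>/dev/null
    }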
00:07:40.698   23:41:10	-- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]]
00:07:40.698   23:41:10	-- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:07:40.698   23:41:10	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:40.698   23:41:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:40.698   23:41:10	-- common/autotest_common.sh@10 -- # set +x
00:07:40.698  ************************************
00:07:40.698  START TEST spdkcli_tcp
00:07:40.698  ************************************
00:07:40.698   23:41:10	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:07:40.698  * Looking for test storage...
00:07:40.698  * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:07:40.698    23:41:10	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:40.698     23:41:10	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:40.698     23:41:10	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:40.698    23:41:11	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:40.698    23:41:11	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:40.698    23:41:11	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:40.698    23:41:11	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:40.698    23:41:11	-- scripts/common.sh@335 -- # IFS=.-:
00:07:40.698    23:41:11	-- scripts/common.sh@335 -- # read -ra ver1
00:07:40.698    23:41:11	-- scripts/common.sh@336 -- # IFS=.-:
00:07:40.698    23:41:11	-- scripts/common.sh@336 -- # read -ra ver2
00:07:40.698    23:41:11	-- scripts/common.sh@337 -- # local 'op=<'
00:07:40.698    23:41:11	-- scripts/common.sh@339 -- # ver1_l=2
00:07:40.698    23:41:11	-- scripts/common.sh@340 -- # ver2_l=1
00:07:40.698    23:41:11	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:40.698    23:41:11	-- scripts/common.sh@343 -- # case "$op" in
00:07:40.698    23:41:11	-- scripts/common.sh@344 -- # : 1
00:07:40.698    23:41:11	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:40.698    23:41:11	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:40.698     23:41:11	-- scripts/common.sh@364 -- # decimal 1
00:07:40.698     23:41:11	-- scripts/common.sh@352 -- # local d=1
00:07:40.698     23:41:11	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:40.698     23:41:11	-- scripts/common.sh@354 -- # echo 1
00:07:40.698    23:41:11	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:40.698     23:41:11	-- scripts/common.sh@365 -- # decimal 2
00:07:40.698     23:41:11	-- scripts/common.sh@352 -- # local d=2
00:07:40.698     23:41:11	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:40.698     23:41:11	-- scripts/common.sh@354 -- # echo 2
00:07:40.698    23:41:11	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:40.698    23:41:11	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:40.698    23:41:11	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:40.698    23:41:11	-- scripts/common.sh@367 -- # return 0
00:07:40.698    23:41:11	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:40.698    23:41:11	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:40.698  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:40.698  		--rc genhtml_branch_coverage=1
00:07:40.698  		--rc genhtml_function_coverage=1
00:07:40.698  		--rc genhtml_legend=1
00:07:40.698  		--rc geninfo_all_blocks=1
00:07:40.698  		--rc geninfo_unexecuted_blocks=1
00:07:40.698  		
00:07:40.698  		'
00:07:40.698    23:41:11	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:40.698  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:40.698  		--rc genhtml_branch_coverage=1
00:07:40.698  		--rc genhtml_function_coverage=1
00:07:40.698  		--rc genhtml_legend=1
00:07:40.698  		--rc geninfo_all_blocks=1
00:07:40.698  		--rc geninfo_unexecuted_blocks=1
00:07:40.698  		
00:07:40.698  		'
00:07:40.698    23:41:11	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:40.698  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:40.698  		--rc genhtml_branch_coverage=1
00:07:40.698  		--rc genhtml_function_coverage=1
00:07:40.698  		--rc genhtml_legend=1
00:07:40.698  		--rc geninfo_all_blocks=1
00:07:40.698  		--rc geninfo_unexecuted_blocks=1
00:07:40.698  		
00:07:40.698  		'
00:07:40.698    23:41:11	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:40.698  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:40.698  		--rc genhtml_branch_coverage=1
00:07:40.698  		--rc genhtml_function_coverage=1
00:07:40.698  		--rc genhtml_legend=1
00:07:40.698  		--rc geninfo_all_blocks=1
00:07:40.698  		--rc geninfo_unexecuted_blocks=1
00:07:40.698  		
00:07:40.698  		'
00:07:40.698   23:41:11	-- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:07:40.699    23:41:11	-- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:07:40.699    23:41:11	-- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:07:40.699   23:41:11	-- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:07:40.699   23:41:11	-- spdkcli/tcp.sh@19 -- # PORT=9998
00:07:40.699   23:41:11	-- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:07:40.699   23:41:11	-- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:07:40.699   23:41:11	-- common/autotest_common.sh@722 -- # xtrace_disable
00:07:40.699   23:41:11	-- common/autotest_common.sh@10 -- # set +x
00:07:40.699   23:41:11	-- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=103827
00:07:40.699   23:41:11	-- spdkcli/tcp.sh@27 -- # waitforlisten 103827
00:07:40.699   23:41:11	-- common/autotest_common.sh@829 -- # '[' -z 103827 ']'
00:07:40.699   23:41:11	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:40.699   23:41:11	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:40.699   23:41:11	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:40.699   23:41:11	-- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:07:40.699  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:40.699   23:41:11	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:40.699   23:41:11	-- common/autotest_common.sh@10 -- # set +x
00:07:40.699  [2024-12-13 23:41:11.130524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:40.699  [2024-12-13 23:41:11.130987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103827 ]
00:07:40.699  [2024-12-13 23:41:11.305545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:40.993  [2024-12-13 23:41:11.590831] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:40.993  [2024-12-13 23:41:11.591165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:40.993  [2024-12-13 23:41:11.591181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:42.368   23:41:12	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:42.368   23:41:12	-- common/autotest_common.sh@862 -- # return 0
00:07:42.368   23:41:12	-- spdkcli/tcp.sh@31 -- # socat_pid=103863
00:07:42.368   23:41:12	-- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:07:42.368   23:41:12	-- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:07:42.368  [
00:07:42.368    "spdk_get_version",
00:07:42.368    "rpc_get_methods",
00:07:42.368    "trace_get_info",
00:07:42.368    "trace_get_tpoint_group_mask",
00:07:42.368    "trace_disable_tpoint_group",
00:07:42.368    "trace_enable_tpoint_group",
00:07:42.368    "trace_clear_tpoint_mask",
00:07:42.368    "trace_set_tpoint_mask",
00:07:42.368    "framework_get_pci_devices",
00:07:42.368    "framework_get_config",
00:07:42.368    "framework_get_subsystems",
00:07:42.368    "iobuf_get_stats",
00:07:42.368    "iobuf_set_options",
00:07:42.368    "sock_set_default_impl",
00:07:42.368    "sock_impl_set_options",
00:07:42.368    "sock_impl_get_options",
00:07:42.368    "vmd_rescan",
00:07:42.368    "vmd_remove_device",
00:07:42.368    "vmd_enable",
00:07:42.368    "accel_get_stats",
00:07:42.368    "accel_set_options",
00:07:42.368    "accel_set_driver",
00:07:42.368    "accel_crypto_key_destroy",
00:07:42.368    "accel_crypto_keys_get",
00:07:42.368    "accel_crypto_key_create",
00:07:42.368    "accel_assign_opc",
00:07:42.368    "accel_get_module_info",
00:07:42.368    "accel_get_opc_assignments",
00:07:42.368    "notify_get_notifications",
00:07:42.368    "notify_get_types",
00:07:42.368    "bdev_get_histogram",
00:07:42.368    "bdev_enable_histogram",
00:07:42.368    "bdev_set_qos_limit",
00:07:42.368    "bdev_set_qd_sampling_period",
00:07:42.368    "bdev_get_bdevs",
00:07:42.368    "bdev_reset_iostat",
00:07:42.368    "bdev_get_iostat",
00:07:42.368    "bdev_examine",
00:07:42.368    "bdev_wait_for_examine",
00:07:42.368    "bdev_set_options",
00:07:42.368    "scsi_get_devices",
00:07:42.368    "thread_set_cpumask",
00:07:42.368    "framework_get_scheduler",
00:07:42.368    "framework_set_scheduler",
00:07:42.368    "framework_get_reactors",
00:07:42.368    "thread_get_io_channels",
00:07:42.368    "thread_get_pollers",
00:07:42.368    "thread_get_stats",
00:07:42.368    "framework_monitor_context_switch",
00:07:42.368    "spdk_kill_instance",
00:07:42.368    "log_enable_timestamps",
00:07:42.368    "log_get_flags",
00:07:42.368    "log_clear_flag",
00:07:42.368    "log_set_flag",
00:07:42.368    "log_get_level",
00:07:42.368    "log_set_level",
00:07:42.368    "log_get_print_level",
00:07:42.368    "log_set_print_level",
00:07:42.368    "framework_enable_cpumask_locks",
00:07:42.368    "framework_disable_cpumask_locks",
00:07:42.368    "framework_wait_init",
00:07:42.368    "framework_start_init",
00:07:42.368    "virtio_blk_create_transport",
00:07:42.368    "virtio_blk_get_transports",
00:07:42.368    "vhost_controller_set_coalescing",
00:07:42.368    "vhost_get_controllers",
00:07:42.368    "vhost_delete_controller",
00:07:42.368    "vhost_create_blk_controller",
00:07:42.368    "vhost_scsi_controller_remove_target",
00:07:42.368    "vhost_scsi_controller_add_target",
00:07:42.368    "vhost_start_scsi_controller",
00:07:42.368    "vhost_create_scsi_controller",
00:07:42.368    "nbd_get_disks",
00:07:42.368    "nbd_stop_disk",
00:07:42.368    "nbd_start_disk",
00:07:42.368    "env_dpdk_get_mem_stats",
00:07:42.368    "nvmf_subsystem_get_listeners",
00:07:42.368    "nvmf_subsystem_get_qpairs",
00:07:42.368    "nvmf_subsystem_get_controllers",
00:07:42.368    "nvmf_get_stats",
00:07:42.368    "nvmf_get_transports",
00:07:42.368    "nvmf_create_transport",
00:07:42.368    "nvmf_get_targets",
00:07:42.368    "nvmf_delete_target",
00:07:42.368    "nvmf_create_target",
00:07:42.368    "nvmf_subsystem_allow_any_host",
00:07:42.368    "nvmf_subsystem_remove_host",
00:07:42.368    "nvmf_subsystem_add_host",
00:07:42.368    "nvmf_subsystem_remove_ns",
00:07:42.368    "nvmf_subsystem_add_ns",
00:07:42.368    "nvmf_subsystem_listener_set_ana_state",
00:07:42.368    "nvmf_discovery_get_referrals",
00:07:42.368    "nvmf_discovery_remove_referral",
00:07:42.368    "nvmf_discovery_add_referral",
00:07:42.368    "nvmf_subsystem_remove_listener",
00:07:42.368    "nvmf_subsystem_add_listener",
00:07:42.368    "nvmf_delete_subsystem",
00:07:42.368    "nvmf_create_subsystem",
00:07:42.368    "nvmf_get_subsystems",
00:07:42.368    "nvmf_set_crdt",
00:07:42.368    "nvmf_set_config",
00:07:42.368    "nvmf_set_max_subsystems",
00:07:42.368    "iscsi_set_options",
00:07:42.368    "iscsi_get_auth_groups",
00:07:42.368    "iscsi_auth_group_remove_secret",
00:07:42.368    "iscsi_auth_group_add_secret",
00:07:42.368    "iscsi_delete_auth_group",
00:07:42.368    "iscsi_create_auth_group",
00:07:42.368    "iscsi_set_discovery_auth",
00:07:42.368    "iscsi_get_options",
00:07:42.368    "iscsi_target_node_request_logout",
00:07:42.368    "iscsi_target_node_set_redirect",
00:07:42.368    "iscsi_target_node_set_auth",
00:07:42.368    "iscsi_target_node_add_lun",
00:07:42.368    "iscsi_get_connections",
00:07:42.368    "iscsi_portal_group_set_auth",
00:07:42.368    "iscsi_start_portal_group",
00:07:42.368    "iscsi_delete_portal_group",
00:07:42.368    "iscsi_create_portal_group",
00:07:42.368    "iscsi_get_portal_groups",
00:07:42.368    "iscsi_delete_target_node",
00:07:42.368    "iscsi_target_node_remove_pg_ig_maps",
00:07:42.368    "iscsi_target_node_add_pg_ig_maps",
00:07:42.368    "iscsi_create_target_node",
00:07:42.368    "iscsi_get_target_nodes",
00:07:42.368    "iscsi_delete_initiator_group",
00:07:42.368    "iscsi_initiator_group_remove_initiators",
00:07:42.368    "iscsi_initiator_group_add_initiators",
00:07:42.368    "iscsi_create_initiator_group",
00:07:42.368    "iscsi_get_initiator_groups",
00:07:42.368    "iaa_scan_accel_module",
00:07:42.368    "dsa_scan_accel_module",
00:07:42.368    "ioat_scan_accel_module",
00:07:42.368    "accel_error_inject_error",
00:07:42.368    "bdev_iscsi_delete",
00:07:42.368    "bdev_iscsi_create",
00:07:42.368    "bdev_iscsi_set_options",
00:07:42.368    "bdev_virtio_attach_controller",
00:07:42.368    "bdev_virtio_scsi_get_devices",
00:07:42.368    "bdev_virtio_detach_controller",
00:07:42.368    "bdev_virtio_blk_set_hotplug",
00:07:42.368    "bdev_ftl_set_property",
00:07:42.368    "bdev_ftl_get_properties",
00:07:42.368    "bdev_ftl_get_stats",
00:07:42.368    "bdev_ftl_unmap",
00:07:42.368    "bdev_ftl_unload",
00:07:42.368    "bdev_ftl_delete",
00:07:42.368    "bdev_ftl_load",
00:07:42.368    "bdev_ftl_create",
00:07:42.368    "bdev_aio_delete",
00:07:42.368    "bdev_aio_rescan",
00:07:42.368    "bdev_aio_create",
00:07:42.368    "blobfs_create",
00:07:42.368    "blobfs_detect",
00:07:42.368    "blobfs_set_cache_size",
00:07:42.368    "bdev_zone_block_delete",
00:07:42.368    "bdev_zone_block_create",
00:07:42.368    "bdev_delay_delete",
00:07:42.368    "bdev_delay_create",
00:07:42.368    "bdev_delay_update_latency",
00:07:42.368    "bdev_split_delete",
00:07:42.368    "bdev_split_create",
00:07:42.368    "bdev_error_inject_error",
00:07:42.368    "bdev_error_delete",
00:07:42.368    "bdev_error_create",
00:07:42.368    "bdev_raid_set_options",
00:07:42.368    "bdev_raid_remove_base_bdev",
00:07:42.368    "bdev_raid_add_base_bdev",
00:07:42.368    "bdev_raid_delete",
00:07:42.368    "bdev_raid_create",
00:07:42.368    "bdev_raid_get_bdevs",
00:07:42.368    "bdev_lvol_grow_lvstore",
00:07:42.368    "bdev_lvol_get_lvols",
00:07:42.368    "bdev_lvol_get_lvstores",
00:07:42.368    "bdev_lvol_delete",
00:07:42.368    "bdev_lvol_set_read_only",
00:07:42.368    "bdev_lvol_resize",
00:07:42.368    "bdev_lvol_decouple_parent",
00:07:42.368    "bdev_lvol_inflate",
00:07:42.368    "bdev_lvol_rename",
00:07:42.368    "bdev_lvol_clone_bdev",
00:07:42.368    "bdev_lvol_clone",
00:07:42.368    "bdev_lvol_snapshot",
00:07:42.368    "bdev_lvol_create",
00:07:42.368    "bdev_lvol_delete_lvstore",
00:07:42.368    "bdev_lvol_rename_lvstore",
00:07:42.368    "bdev_lvol_create_lvstore",
00:07:42.368    "bdev_passthru_delete",
00:07:42.368    "bdev_passthru_create",
00:07:42.368    "bdev_nvme_cuse_unregister",
00:07:42.368    "bdev_nvme_cuse_register",
00:07:42.368    "bdev_opal_new_user",
00:07:42.368    "bdev_opal_set_lock_state",
00:07:42.368    "bdev_opal_delete",
00:07:42.368    "bdev_opal_get_info",
00:07:42.368    "bdev_opal_create",
00:07:42.368    "bdev_nvme_opal_revert",
00:07:42.368    "bdev_nvme_opal_init",
00:07:42.368    "bdev_nvme_send_cmd",
00:07:42.368    "bdev_nvme_get_path_iostat",
00:07:42.368    "bdev_nvme_get_mdns_discovery_info",
00:07:42.368    "bdev_nvme_stop_mdns_discovery",
00:07:42.368    "bdev_nvme_start_mdns_discovery",
00:07:42.368    "bdev_nvme_set_multipath_policy",
00:07:42.368    "bdev_nvme_set_preferred_path",
00:07:42.368    "bdev_nvme_get_io_paths",
00:07:42.369    "bdev_nvme_remove_error_injection",
00:07:42.369    "bdev_nvme_add_error_injection",
00:07:42.369    "bdev_nvme_get_discovery_info",
00:07:42.369    "bdev_nvme_stop_discovery",
00:07:42.369    "bdev_nvme_start_discovery",
00:07:42.369    "bdev_nvme_get_controller_health_info",
00:07:42.369    "bdev_nvme_disable_controller",
00:07:42.369    "bdev_nvme_enable_controller",
00:07:42.369    "bdev_nvme_reset_controller",
00:07:42.369    "bdev_nvme_get_transport_statistics",
00:07:42.369    "bdev_nvme_apply_firmware",
00:07:42.369    "bdev_nvme_detach_controller",
00:07:42.369    "bdev_nvme_get_controllers",
00:07:42.369    "bdev_nvme_attach_controller",
00:07:42.369    "bdev_nvme_set_hotplug",
00:07:42.369    "bdev_nvme_set_options",
00:07:42.369    "bdev_null_resize",
00:07:42.369    "bdev_null_delete",
00:07:42.369    "bdev_null_create",
00:07:42.369    "bdev_malloc_delete",
00:07:42.369    "bdev_malloc_create"
00:07:42.369  ]
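[editor's note] The method list above arrived over TCP: tcp.sh@30 starts socat as a one-shot bridge from 127.0.0.1:9998 to the UNIX RPC socket, and tcp.sh@33 points rpc.py at that TCP endpoint (-r 100 connection retries, -t 2 s timeout). Reproduced from the traced commands:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # @30
    socat_pid=$!                                              # @31
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods                  # @33
    kill "$socat_pid"

A single TCP-LISTEN (no fork option) suffices here because rpc.py opens exactly one connection.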
00:07:42.369   23:41:13	-- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:07:42.369   23:41:13	-- common/autotest_common.sh@728 -- # xtrace_disable
00:07:42.369   23:41:13	-- common/autotest_common.sh@10 -- # set +x
00:07:42.627   23:41:13	-- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:07:42.627   23:41:13	-- spdkcli/tcp.sh@38 -- # killprocess 103827
00:07:42.627   23:41:13	-- common/autotest_common.sh@936 -- # '[' -z 103827 ']'
00:07:42.627   23:41:13	-- common/autotest_common.sh@940 -- # kill -0 103827
00:07:42.627    23:41:13	-- common/autotest_common.sh@941 -- # uname
00:07:42.627   23:41:13	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:42.627    23:41:13	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103827
00:07:42.627   23:41:13	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:42.627   23:41:13	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:42.627   23:41:13	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 103827'
00:07:42.627  killing process with pid 103827
00:07:42.627   23:41:13	-- common/autotest_common.sh@955 -- # kill 103827
00:07:42.627   23:41:13	-- common/autotest_common.sh@960 -- # wait 103827
00:07:44.531  
00:07:44.531  real	0m4.242s
00:07:44.531  user	0m7.659s
00:07:44.531  sys	0m0.712s
00:07:44.531   23:41:15	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:44.531   23:41:15	-- common/autotest_common.sh@10 -- # set +x
00:07:44.531  ************************************
00:07:44.531  END TEST spdkcli_tcp
00:07:44.531  ************************************
00:07:44.531   23:41:15	-- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:07:44.531   23:41:15	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:44.531   23:41:15	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:44.531   23:41:15	-- common/autotest_common.sh@10 -- # set +x
00:07:44.531  ************************************
00:07:44.531  START TEST dpdk_mem_utility
00:07:44.531  ************************************
00:07:44.531   23:41:15	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:07:44.531  * Looking for test storage...
00:07:44.531  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
00:07:44.531    23:41:15	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:44.531     23:41:15	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:44.531     23:41:15	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:44.789    23:41:15	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:44.789    23:41:15	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:44.789    23:41:15	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:44.789    23:41:15	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:44.789    23:41:15	-- scripts/common.sh@335 -- # IFS=.-:
00:07:44.789    23:41:15	-- scripts/common.sh@335 -- # read -ra ver1
00:07:44.789    23:41:15	-- scripts/common.sh@336 -- # IFS=.-:
00:07:44.789    23:41:15	-- scripts/common.sh@336 -- # read -ra ver2
00:07:44.789    23:41:15	-- scripts/common.sh@337 -- # local 'op=<'
00:07:44.789    23:41:15	-- scripts/common.sh@339 -- # ver1_l=2
00:07:44.789    23:41:15	-- scripts/common.sh@340 -- # ver2_l=1
00:07:44.789    23:41:15	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:44.789    23:41:15	-- scripts/common.sh@343 -- # case "$op" in
00:07:44.789    23:41:15	-- scripts/common.sh@344 -- # : 1
00:07:44.789    23:41:15	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:44.789    23:41:15	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:44.789     23:41:15	-- scripts/common.sh@364 -- # decimal 1
00:07:44.789     23:41:15	-- scripts/common.sh@352 -- # local d=1
00:07:44.789     23:41:15	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:44.789     23:41:15	-- scripts/common.sh@354 -- # echo 1
00:07:44.789    23:41:15	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:44.789     23:41:15	-- scripts/common.sh@365 -- # decimal 2
00:07:44.789     23:41:15	-- scripts/common.sh@352 -- # local d=2
00:07:44.789     23:41:15	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:44.789     23:41:15	-- scripts/common.sh@354 -- # echo 2
00:07:44.789    23:41:15	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:44.789    23:41:15	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:44.789    23:41:15	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:44.789    23:41:15	-- scripts/common.sh@367 -- # return 0
00:07:44.789    23:41:15	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:44.789    23:41:15	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:44.789  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:44.789  		--rc genhtml_branch_coverage=1
00:07:44.789  		--rc genhtml_function_coverage=1
00:07:44.789  		--rc genhtml_legend=1
00:07:44.789  		--rc geninfo_all_blocks=1
00:07:44.789  		--rc geninfo_unexecuted_blocks=1
00:07:44.789  		
00:07:44.789  		'
00:07:44.789    23:41:15	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:44.789  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:44.789  		--rc genhtml_branch_coverage=1
00:07:44.789  		--rc genhtml_function_coverage=1
00:07:44.789  		--rc genhtml_legend=1
00:07:44.789  		--rc geninfo_all_blocks=1
00:07:44.789  		--rc geninfo_unexecuted_blocks=1
00:07:44.789  		
00:07:44.789  		'
00:07:44.789    23:41:15	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:44.789  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:44.789  		--rc genhtml_branch_coverage=1
00:07:44.789  		--rc genhtml_function_coverage=1
00:07:44.789  		--rc genhtml_legend=1
00:07:44.789  		--rc geninfo_all_blocks=1
00:07:44.789  		--rc geninfo_unexecuted_blocks=1
00:07:44.789  		
00:07:44.789  		'
00:07:44.789    23:41:15	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:44.789  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:44.789  		--rc genhtml_branch_coverage=1
00:07:44.789  		--rc genhtml_function_coverage=1
00:07:44.789  		--rc genhtml_legend=1
00:07:44.789  		--rc geninfo_all_blocks=1
00:07:44.789  		--rc geninfo_unexecuted_blocks=1
00:07:44.789  		
00:07:44.789  		'
00:07:44.789   23:41:15	-- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:07:44.789   23:41:15	-- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=103961
00:07:44.789   23:41:15	-- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 103961
00:07:44.789   23:41:15	-- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:07:44.789   23:41:15	-- common/autotest_common.sh@829 -- # '[' -z 103961 ']'
00:07:44.789   23:41:15	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:44.789   23:41:15	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:44.789  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:44.789   23:41:15	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:44.789   23:41:15	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:44.789   23:41:15	-- common/autotest_common.sh@10 -- # set +x
00:07:44.789  [2024-12-13 23:41:15.400246] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:44.789  [2024-12-13 23:41:15.400490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103961 ]
00:07:45.047  [2024-12-13 23:41:15.566518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:45.047  [2024-12-13 23:41:15.764082] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:45.047  [2024-12-13 23:41:15.764335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:46.424   23:41:16	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:46.424   23:41:16	-- common/autotest_common.sh@862 -- # return 0
00:07:46.424   23:41:16	-- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:07:46.424   23:41:16	-- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:07:46.424   23:41:16	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:46.424   23:41:16	-- common/autotest_common.sh@10 -- # set +x
00:07:46.424  {
00:07:46.424  "filename": "/tmp/spdk_mem_dump.txt"
00:07:46.424  }
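[editor's note] env_dpdk_get_mem_stats does not return the statistics inline: it writes them to a dump file and returns only that file's path (here /tmp/spdk_mem_dump.txt), which dpdk_mem_info.py then parses. A small sketch to fetch the path and peek at the dump, assuming python3 is on PATH (jq would work equally well):

    dump=$(scripts/rpc.py env_dpdk_get_mem_stats |
           python3 -c 'import json,sys; print(json.load(sys.stdin)["filename"])')
    head "$dump"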
00:07:46.424   23:41:16	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:46.424   23:41:16	-- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:07:46.424  DPDK memory size 820.000000 MiB in 1 heap(s)
00:07:46.424  1 heaps totaling size 820.000000 MiB
00:07:46.424    size:  820.000000 MiB heap id: 0
00:07:46.424  end heaps----------
00:07:46.424  8 mempools totaling size 598.116089 MiB
00:07:46.424    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:07:46.424    size:  158.602051 MiB name: PDU_data_out_Pool
00:07:46.424    size:   84.521057 MiB name: bdev_io_103961
00:07:46.424    size:   51.011292 MiB name: evtpool_103961
00:07:46.424    size:   50.003479 MiB name: msgpool_103961
00:07:46.424    size:   21.763794 MiB name: PDU_Pool
00:07:46.424    size:   19.513306 MiB name: SCSI_TASK_Pool
00:07:46.424    size:    0.026123 MiB name: Session_Pool
00:07:46.424  end mempools-------
00:07:46.424  6 memzones totaling size 4.142822 MiB
00:07:46.425    size:    1.000366 MiB name: RG_ring_0_103961
00:07:46.425    size:    1.000366 MiB name: RG_ring_1_103961
00:07:46.425    size:    1.000366 MiB name: RG_ring_4_103961
00:07:46.425    size:    1.000366 MiB name: RG_ring_5_103961
00:07:46.425    size:    0.125366 MiB name: RG_ring_2_103961
00:07:46.425    size:    0.015991 MiB name: RG_ring_3_103961
00:07:46.425  end memzones-------
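[editor's note] The two dpdk_mem_info.py invocations in this test map directly onto the output above and below: with no arguments (@21) it prints the heap/mempool/memzone summary just shown, and with -m <heap-id> (@23) it walks every free and malloc element of that heap, as in the long listing that follows.

    scripts/dpdk_mem_info.py          # @21: heaps, mempools, memzones summary
    scripts/dpdk_mem_info.py -m 0     # @23: per-element view of heap id 0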
00:07:46.425   23:41:17	-- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:07:46.425  heap id: 0 total size: 820.000000 MiB number of busy elements: 220 number of free elements: 18
00:07:46.425    list of free elements. size: 18.471191 MiB
00:07:46.425      element at address: 0x200000400000 with size:    1.999451 MiB
00:07:46.425      element at address: 0x200000800000 with size:    1.996887 MiB
00:07:46.425      element at address: 0x200007000000 with size:    1.995972 MiB
00:07:46.425      element at address: 0x20000b200000 with size:    1.995972 MiB
00:07:46.425      element at address: 0x200019100040 with size:    0.999939 MiB
00:07:46.425      element at address: 0x200019500040 with size:    0.999939 MiB
00:07:46.425      element at address: 0x200019600000 with size:    0.999329 MiB
00:07:46.425      element at address: 0x200003e00000 with size:    0.996094 MiB
00:07:46.425      element at address: 0x200032200000 with size:    0.994324 MiB
00:07:46.425      element at address: 0x200018e00000 with size:    0.959656 MiB
00:07:46.425      element at address: 0x200019900040 with size:    0.937256 MiB
00:07:46.425      element at address: 0x200000200000 with size:    0.835083 MiB
00:07:46.425      element at address: 0x20001b000000 with size:    0.562439 MiB
00:07:46.425      element at address: 0x200019200000 with size:    0.489197 MiB
00:07:46.425      element at address: 0x200019a00000 with size:    0.485413 MiB
00:07:46.425      element at address: 0x200013800000 with size:    0.468140 MiB
00:07:46.425      element at address: 0x200028400000 with size:    0.399963 MiB
00:07:46.425      element at address: 0x200003a00000 with size:    0.356140 MiB
00:07:46.425    list of standard malloc elements. size: 199.264404 MiB
00:07:46.425      element at address: 0x20000b3fef80 with size:  132.000183 MiB
00:07:46.425      element at address: 0x2000071fef80 with size:   64.000183 MiB
00:07:46.425      element at address: 0x200018ffff80 with size:    1.000183 MiB
00:07:46.425      element at address: 0x2000193fff80 with size:    1.000183 MiB
00:07:46.425      element at address: 0x2000197fff80 with size:    1.000183 MiB
00:07:46.425      element at address: 0x2000003d9e80 with size:    0.140808 MiB
00:07:46.425      element at address: 0x2000199eff40 with size:    0.062683 MiB
00:07:46.425      element at address: 0x2000003fdf40 with size:    0.007996 MiB
00:07:46.425      element at address: 0x20000b1ff380 with size:    0.000366 MiB
00:07:46.425      element at address: 0x20000b1ff040 with size:    0.000305 MiB
00:07:46.425      element at address: 0x2000137ff040 with size:    0.000305 MiB
00:07:46.425      element at address: 0x2000002d5c80 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d5d80 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d5e80 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d5f80 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d6080 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d6180 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d6280 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d6380 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d6480 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d6580 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d6680 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d6780 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d6880 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d6980 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d6a80 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d6d00 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d6e00 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d6f00 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d7000 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d7100 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d7200 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d7300 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d7400 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d7500 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d7600 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d7700 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d7800 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d7900 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d7a00 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000002d7b00 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000003d9d80 with size:    0.000244 MiB
00:07:46.425      element at address: 0x200003aff980 with size:    0.000244 MiB
00:07:46.425      element at address: 0x200003affa80 with size:    0.000244 MiB
00:07:46.425      element at address: 0x200003eff000 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20000b1ff180 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20000b1ff280 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20000b1ff500 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20000b1ff600 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20000b1ff700 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20000b1ff800 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20000b1ff900 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20000b1ffa00 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20000b1ffb00 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20000b1ffc00 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20000b1ffd00 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20000b1ffe00 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20000b1fff00 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000137ff180 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000137ff280 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000137ff380 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000137ff480 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000137ff580 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000137ff680 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000137ff780 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000137ff880 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000137ff980 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000137ffa80 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000137ffb80 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000137ffc80 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000137fff00 with size:    0.000244 MiB
00:07:46.425      element at address: 0x200013877d80 with size:    0.000244 MiB
00:07:46.425      element at address: 0x200013877e80 with size:    0.000244 MiB
00:07:46.425      element at address: 0x200013877f80 with size:    0.000244 MiB
00:07:46.425      element at address: 0x200013878080 with size:    0.000244 MiB
00:07:46.425      element at address: 0x200013878180 with size:    0.000244 MiB
00:07:46.425      element at address: 0x200013878280 with size:    0.000244 MiB
00:07:46.425      element at address: 0x200013878380 with size:    0.000244 MiB
00:07:46.425      element at address: 0x200013878480 with size:    0.000244 MiB
00:07:46.425      element at address: 0x200013878580 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000138f88c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x200018efdd00 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001927d3c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001927d4c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001927d5c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001927d6c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001927d7c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001927d8c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001927d9c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x2000192fdd00 with size:    0.000244 MiB
00:07:46.425      element at address: 0x200019abc680 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b08ffc0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0900c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0901c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0902c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0903c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0904c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0905c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0906c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0907c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0908c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0909c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b090ac0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b090bc0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b090cc0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b090dc0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b090ec0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b090fc0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0910c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0911c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0912c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0913c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0914c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0915c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0916c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0917c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0918c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0919c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b091ac0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b091bc0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b091cc0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b091dc0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b091ec0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b091fc0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0920c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0921c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0922c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0923c0 with size:    0.000244 MiB
00:07:46.425      element at address: 0x20001b0924c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0925c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0926c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0927c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0928c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0929c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b092ac0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b092bc0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b092cc0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b092dc0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b092ec0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b092fc0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0930c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0931c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0932c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0933c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0934c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0935c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0936c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0937c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0938c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0939c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b093ac0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b093bc0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b093cc0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b093dc0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b093ec0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b093fc0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0940c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0941c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0942c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0943c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0944c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0945c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0946c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0947c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0948c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0949c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b094ac0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b094bc0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b094cc0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b094dc0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b094ec0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b094fc0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0950c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0951c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0952c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20001b0953c0 with size:    0.000244 MiB
00:07:46.426      element at address: 0x200028466640 with size:    0.000244 MiB
00:07:46.426      element at address: 0x200028466740 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846d400 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846d680 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846d780 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846d880 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846d980 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846da80 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846db80 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846dc80 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846dd80 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846de80 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846df80 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846e080 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846e180 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846e280 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846e380 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846e480 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846e580 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846e680 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846e780 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846e880 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846e980 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846ea80 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846eb80 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846ec80 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846ed80 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846ee80 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846ef80 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846f080 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846f180 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846f280 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846f380 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846f480 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846f580 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846f680 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846f780 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846f880 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846f980 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846fa80 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846fb80 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846fc80 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846fd80 with size:    0.000244 MiB
00:07:46.426      element at address: 0x20002846fe80 with size:    0.000244 MiB
00:07:46.426    list of memzone associated elements. size: 602.264404 MiB
00:07:46.426      element at address: 0x20001b0954c0 with size:  211.416809 MiB
00:07:46.426        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:07:46.426      element at address: 0x20002846ff80 with size:  157.562622 MiB
00:07:46.426        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:07:46.426      element at address: 0x2000139fab40 with size:   84.020691 MiB
00:07:46.426        associated memzone info: size:   84.020508 MiB name: MP_bdev_io_103961_0
00:07:46.426      element at address: 0x2000009ff340 with size:   48.003113 MiB
00:07:46.426        associated memzone info: size:   48.002930 MiB name: MP_evtpool_103961_0
00:07:46.426      element at address: 0x200003fff340 with size:   48.003113 MiB
00:07:46.426        associated memzone info: size:   48.002930 MiB name: MP_msgpool_103961_0
00:07:46.426      element at address: 0x200019bbe900 with size:   20.255615 MiB
00:07:46.426        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:07:46.426      element at address: 0x2000323feb00 with size:   18.005127 MiB
00:07:46.426        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:07:46.426      element at address: 0x2000005ffdc0 with size:    2.000549 MiB
00:07:46.426        associated memzone info: size:    2.000366 MiB name: RG_MP_evtpool_103961
00:07:46.426      element at address: 0x200003bffdc0 with size:    2.000549 MiB
00:07:46.426        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_103961
00:07:46.426      element at address: 0x2000002d7c00 with size:    1.008179 MiB
00:07:46.426        associated memzone info: size:    1.007996 MiB name: MP_evtpool_103961
00:07:46.426      element at address: 0x2000192fde00 with size:    1.008179 MiB
00:07:46.426        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:07:46.426      element at address: 0x200019abc780 with size:    1.008179 MiB
00:07:46.426        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:07:46.426      element at address: 0x200018efde00 with size:    1.008179 MiB
00:07:46.426        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:07:46.426      element at address: 0x2000138f89c0 with size:    1.008179 MiB
00:07:46.426        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:07:46.426      element at address: 0x200003eff100 with size:    1.000549 MiB
00:07:46.426        associated memzone info: size:    1.000366 MiB name: RG_ring_0_103961
00:07:46.426      element at address: 0x200003affb80 with size:    1.000549 MiB
00:07:46.426        associated memzone info: size:    1.000366 MiB name: RG_ring_1_103961
00:07:46.426      element at address: 0x2000196ffd40 with size:    1.000549 MiB
00:07:46.426        associated memzone info: size:    1.000366 MiB name: RG_ring_4_103961
00:07:46.426      element at address: 0x2000322fe8c0 with size:    1.000549 MiB
00:07:46.426        associated memzone info: size:    1.000366 MiB name: RG_ring_5_103961
00:07:46.426      element at address: 0x200003a5b2c0 with size:    0.500549 MiB
00:07:46.426        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_103961
00:07:46.426      element at address: 0x20001927dac0 with size:    0.500549 MiB
00:07:46.426        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:07:46.426      element at address: 0x200013878680 with size:    0.500549 MiB
00:07:46.426        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:07:46.426      element at address: 0x200019a7c440 with size:    0.250549 MiB
00:07:46.426        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:07:46.426      element at address: 0x200003adf740 with size:    0.125549 MiB
00:07:46.426        associated memzone info: size:    0.125366 MiB name: RG_ring_2_103961
00:07:46.426      element at address: 0x200018ef5ac0 with size:    0.031799 MiB
00:07:46.426        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:07:46.426      element at address: 0x200028466840 with size:    0.023804 MiB
00:07:46.426        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:07:46.426      element at address: 0x200003adb500 with size:    0.016174 MiB
00:07:46.426        associated memzone info: size:    0.015991 MiB name: RG_ring_3_103961
00:07:46.426      element at address: 0x20002846c9c0 with size:    0.002502 MiB
00:07:46.426        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:07:46.426      element at address: 0x2000002d6b80 with size:    0.000366 MiB
00:07:46.426        associated memzone info: size:    0.000183 MiB name: MP_msgpool_103961
00:07:46.426      element at address: 0x2000137ffd80 with size:    0.000366 MiB
00:07:46.426        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_103961
00:07:46.426      element at address: 0x20002846d500 with size:    0.000366 MiB
00:07:46.426        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
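
The element dump above lists every DPDK memzone still allocated by the app under test (pid 103961) just before teardown. For ad-hoc triage, the per-element sizes can be totalled straight from the log; a minimal sketch, assuming the dump is saved to mem_dump.txt (the field layout is taken from the lines above):

    # Sum the "with size: X MiB" column of the element dump.
    awk '/element at address/ { sum += $(NF-1) }
         END { printf "total allocated: %.3f MiB\n", sum }' mem_dump.txt
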
00:07:46.426   23:41:17	-- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:07:46.426   23:41:17	-- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 103961
00:07:46.426   23:41:17	-- common/autotest_common.sh@936 -- # '[' -z 103961 ']'
00:07:46.427   23:41:17	-- common/autotest_common.sh@940 -- # kill -0 103961
00:07:46.427    23:41:17	-- common/autotest_common.sh@941 -- # uname
00:07:46.427   23:41:17	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:46.427    23:41:17	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103961
00:07:46.427   23:41:17	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:46.427   23:41:17	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:46.427  killing process with pid 103961
00:07:46.427   23:41:17	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 103961'
00:07:46.427   23:41:17	-- common/autotest_common.sh@955 -- # kill 103961
00:07:46.427   23:41:17	-- common/autotest_common.sh@960 -- # wait 103961
00:07:48.962  
00:07:48.962  real	0m3.903s
00:07:48.962  user	0m3.948s
00:07:48.962  sys	0m0.634s
00:07:48.962   23:41:19	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:48.962   23:41:19	-- common/autotest_common.sh@10 -- # set +x
00:07:48.962  ************************************
00:07:48.962  END TEST dpdk_mem_utility
00:07:48.962  ************************************
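
The @936-@960 trace above is autotest_common.sh's killprocess helper. A sketch of its logic, reconstructed from the xtrace (not the verbatim source; the sudo branch is elided because this run does not take it):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                            # @936: no pid given
        kill -0 "$pid" 2>/dev/null || return 1               # @940: still alive?
        local process_name=unknown
        if [ "$(uname)" = Linux ]; then                      # @941
            process_name=$(ps --no-headers -o comm= "$pid")  # @942: reactor_0 here
        fi
        # @946: a process named "sudo" would need elevated handling (elided)
        echo "killing process with pid $pid"                 # @954
        kill "$pid"                                          # @955: SIGTERM by default
        wait "$pid"                                          # @960: reap, propagate status
    }
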
00:07:48.962   23:41:19	-- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:48.962   23:41:19	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:48.962   23:41:19	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:48.962   23:41:19	-- common/autotest_common.sh@10 -- # set +x
00:07:48.962  ************************************
00:07:48.962  START TEST event
00:07:48.962  ************************************
00:07:48.962   23:41:19	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:48.962  * Looking for test storage...
00:07:48.962  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:48.962    23:41:19	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:48.962     23:41:19	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:48.962     23:41:19	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:48.962    23:41:19	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:48.962    23:41:19	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:48.962    23:41:19	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:48.962    23:41:19	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:48.962    23:41:19	-- scripts/common.sh@335 -- # IFS=.-:
00:07:48.962    23:41:19	-- scripts/common.sh@335 -- # read -ra ver1
00:07:48.962    23:41:19	-- scripts/common.sh@336 -- # IFS=.-:
00:07:48.962    23:41:19	-- scripts/common.sh@336 -- # read -ra ver2
00:07:48.962    23:41:19	-- scripts/common.sh@337 -- # local 'op=<'
00:07:48.962    23:41:19	-- scripts/common.sh@339 -- # ver1_l=2
00:07:48.962    23:41:19	-- scripts/common.sh@340 -- # ver2_l=1
00:07:48.962    23:41:19	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:48.962    23:41:19	-- scripts/common.sh@343 -- # case "$op" in
00:07:48.962    23:41:19	-- scripts/common.sh@344 -- # : 1
00:07:48.962    23:41:19	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:48.962    23:41:19	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:48.962     23:41:19	-- scripts/common.sh@364 -- # decimal 1
00:07:48.962     23:41:19	-- scripts/common.sh@352 -- # local d=1
00:07:48.962     23:41:19	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:48.962     23:41:19	-- scripts/common.sh@354 -- # echo 1
00:07:48.962    23:41:19	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:48.962     23:41:19	-- scripts/common.sh@365 -- # decimal 2
00:07:48.962     23:41:19	-- scripts/common.sh@352 -- # local d=2
00:07:48.962     23:41:19	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:48.962     23:41:19	-- scripts/common.sh@354 -- # echo 2
00:07:48.962    23:41:19	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:48.962    23:41:19	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:48.962    23:41:19	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:48.962    23:41:19	-- scripts/common.sh@367 -- # return 0
00:07:48.962    23:41:19	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:48.962    23:41:19	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:48.962  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:48.962  		--rc genhtml_branch_coverage=1
00:07:48.962  		--rc genhtml_function_coverage=1
00:07:48.962  		--rc genhtml_legend=1
00:07:48.962  		--rc geninfo_all_blocks=1
00:07:48.962  		--rc geninfo_unexecuted_blocks=1
00:07:48.962  		
00:07:48.962  		'
00:07:48.962    23:41:19	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:48.962  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:48.962  		--rc genhtml_branch_coverage=1
00:07:48.962  		--rc genhtml_function_coverage=1
00:07:48.962  		--rc genhtml_legend=1
00:07:48.962  		--rc geninfo_all_blocks=1
00:07:48.962  		--rc geninfo_unexecuted_blocks=1
00:07:48.962  		
00:07:48.962  		'
00:07:48.962    23:41:19	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:48.962  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:48.962  		--rc genhtml_branch_coverage=1
00:07:48.962  		--rc genhtml_function_coverage=1
00:07:48.962  		--rc genhtml_legend=1
00:07:48.962  		--rc geninfo_all_blocks=1
00:07:48.962  		--rc geninfo_unexecuted_blocks=1
00:07:48.962  		
00:07:48.962  		'
00:07:48.962    23:41:19	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:48.962  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:48.962  		--rc genhtml_branch_coverage=1
00:07:48.962  		--rc genhtml_function_coverage=1
00:07:48.962  		--rc genhtml_legend=1
00:07:48.962  		--rc geninfo_all_blocks=1
00:07:48.962  		--rc geninfo_unexecuted_blocks=1
00:07:48.962  		
00:07:48.962  		'
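
Before each suite the harness probes the installed lcov and compares its version against 2 (the @332-@367 trace above). The comparator splits versions on ".", "-" and ":" and walks the fields numerically; a sketch of the "<" path this run exercises, with the decimal() sanitizing of non-numeric fields elided:

    lt() { cmp_versions "$1" '<' "$2"; }                   # @372

    cmp_versions() {
        local ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"                     # @335-@336: field split
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do  # @363
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1    # @366: left is newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0    # @367: left is older
        done
        return 1                                           # equal, so not strictly '<'
    }

    lt 1.15 2 && echo "old lcov: selecting the --rc lcov_* option set"

Since lcov 1.15 < 2, the comparison returns 0 and the branch above exports the --rc lcov_* flags seen in LCOV_OPTS.
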
00:07:48.962   23:41:19	-- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:07:48.962    23:41:19	-- bdev/nbd_common.sh@6 -- # set -e
00:07:48.962   23:41:19	-- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:48.962   23:41:19	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:07:48.962   23:41:19	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:48.962   23:41:19	-- common/autotest_common.sh@10 -- # set +x
00:07:48.962  ************************************
00:07:48.962  START TEST event_perf
00:07:48.962  ************************************
00:07:48.962   23:41:19	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:48.962  Running I/O for 1 seconds...[2024-12-13 23:41:19.345366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:48.962  [2024-12-13 23:41:19.345550] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104087 ]
00:07:48.962  [2024-12-13 23:41:19.531651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:49.221  [2024-12-13 23:41:19.717519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:49.221  [2024-12-13 23:41:19.717674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:49.221  [2024-12-13 23:41:19.717801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:49.221  [2024-12-13 23:41:19.717800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:50.603  Running I/O for 1 seconds...
00:07:50.603  lcore  0:   197767
00:07:50.603  lcore  1:   197768
00:07:50.603  lcore  2:   197767
00:07:50.603  lcore  3:   197767
00:07:50.603  done.
00:07:50.603  
00:07:50.603  real	0m1.821s
00:07:50.603  user	0m4.586s
00:07:50.603  sys	0m0.140s
00:07:50.603   23:41:21	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:50.603   23:41:21	-- common/autotest_common.sh@10 -- # set +x
00:07:50.603  ************************************
00:07:50.603  END TEST event_perf
00:07:50.603  ************************************
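
Every suite in this log is launched through run_test, which prints the starred START/END banners and times the command with bash's time builtin (hence the real/user/sys triplets). A sketch inferred from the banners and the @1087/@1114 markers; the real helper also toggles xtrace around the body:

    run_test() {
        local name=$1; shift
        # @1087 sanity-checks that the command was not passed as one quoted word
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                          # @1114: the suite itself
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }
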
00:07:50.603   23:41:21	-- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:07:50.603   23:41:21	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:07:50.603   23:41:21	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:50.603   23:41:21	-- common/autotest_common.sh@10 -- # set +x
00:07:50.603  ************************************
00:07:50.603  START TEST event_reactor
00:07:50.603  ************************************
00:07:50.603   23:41:21	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:07:50.603  [2024-12-13 23:41:21.210186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:50.603  [2024-12-13 23:41:21.210446] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104135 ]
00:07:50.861  [2024-12-13 23:41:21.386708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:51.119  [2024-12-13 23:41:21.676781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:52.493  test_start
00:07:52.493  oneshot
00:07:52.493  tick 100
00:07:52.493  tick 100
00:07:52.493  tick 250
00:07:52.493  tick 100
00:07:52.493  tick 100
00:07:52.493  tick 100
00:07:52.493  tick 250
00:07:52.493  tick 500
00:07:52.493  tick 100
00:07:52.493  tick 100
00:07:52.493  tick 250
00:07:52.493  tick 100
00:07:52.493  tick 100
00:07:52.493  test_end
00:07:52.493  
00:07:52.493  real	0m1.867s
00:07:52.493  user	0m1.631s
00:07:52.493  sys	0m0.136s
00:07:52.493   23:41:23	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:52.493   23:41:23	-- common/autotest_common.sh@10 -- # set +x
00:07:52.493  ************************************
00:07:52.493  END TEST event_reactor
00:07:52.493  ************************************
00:07:52.493   23:41:23	-- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:52.493   23:41:23	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:07:52.493   23:41:23	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:52.493   23:41:23	-- common/autotest_common.sh@10 -- # set +x
00:07:52.493  ************************************
00:07:52.493  START TEST event_reactor_perf
00:07:52.493  ************************************
00:07:52.493   23:41:23	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:52.493  [2024-12-13 23:41:23.130925] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:52.493  [2024-12-13 23:41:23.131295] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104185 ]
00:07:52.751  [2024-12-13 23:41:23.298372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.010  [2024-12-13 23:41:23.502565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:54.384  test_start
00:07:54.384  test_end
00:07:54.384  Performance:   381358 events per second
00:07:54.384  
00:07:54.384  real	0m1.832s
00:07:54.384  user	0m1.622s
00:07:54.384  sys	0m0.109s
00:07:54.384   23:41:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:54.384   23:41:24	-- common/autotest_common.sh@10 -- # set +x
00:07:54.384  ************************************
00:07:54.384  END TEST event_reactor_perf
00:07:54.384  ************************************
00:07:54.384    23:41:24	-- event/event.sh@49 -- # uname -s
00:07:54.384   23:41:24	-- event/event.sh@49 -- # '[' Linux = Linux ']'
00:07:54.384   23:41:24	-- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:07:54.384   23:41:24	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:54.384   23:41:24	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:54.384   23:41:24	-- common/autotest_common.sh@10 -- # set +x
00:07:54.384  ************************************
00:07:54.384  START TEST event_scheduler
00:07:54.384  ************************************
00:07:54.384   23:41:24	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:07:54.384  * Looking for test storage...
00:07:54.384  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:07:54.384    23:41:25	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:54.384     23:41:25	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:54.384     23:41:25	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:54.642    23:41:25	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:54.642    23:41:25	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:54.642    23:41:25	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:54.642    23:41:25	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:54.642    23:41:25	-- scripts/common.sh@335 -- # IFS=.-:
00:07:54.642    23:41:25	-- scripts/common.sh@335 -- # read -ra ver1
00:07:54.642    23:41:25	-- scripts/common.sh@336 -- # IFS=.-:
00:07:54.642    23:41:25	-- scripts/common.sh@336 -- # read -ra ver2
00:07:54.642    23:41:25	-- scripts/common.sh@337 -- # local 'op=<'
00:07:54.642    23:41:25	-- scripts/common.sh@339 -- # ver1_l=2
00:07:54.642    23:41:25	-- scripts/common.sh@340 -- # ver2_l=1
00:07:54.642    23:41:25	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:54.642    23:41:25	-- scripts/common.sh@343 -- # case "$op" in
00:07:54.642    23:41:25	-- scripts/common.sh@344 -- # : 1
00:07:54.642    23:41:25	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:54.642    23:41:25	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:54.642     23:41:25	-- scripts/common.sh@364 -- # decimal 1
00:07:54.642     23:41:25	-- scripts/common.sh@352 -- # local d=1
00:07:54.642     23:41:25	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:54.642     23:41:25	-- scripts/common.sh@354 -- # echo 1
00:07:54.642    23:41:25	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:54.642     23:41:25	-- scripts/common.sh@365 -- # decimal 2
00:07:54.642     23:41:25	-- scripts/common.sh@352 -- # local d=2
00:07:54.642     23:41:25	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:54.642     23:41:25	-- scripts/common.sh@354 -- # echo 2
00:07:54.642    23:41:25	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:54.642    23:41:25	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:54.642    23:41:25	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:54.642    23:41:25	-- scripts/common.sh@367 -- # return 0
00:07:54.642    23:41:25	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:54.642    23:41:25	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:54.642  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:54.642  		--rc genhtml_branch_coverage=1
00:07:54.642  		--rc genhtml_function_coverage=1
00:07:54.642  		--rc genhtml_legend=1
00:07:54.642  		--rc geninfo_all_blocks=1
00:07:54.642  		--rc geninfo_unexecuted_blocks=1
00:07:54.642  		
00:07:54.642  		'
00:07:54.642    23:41:25	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:54.642  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:54.642  		--rc genhtml_branch_coverage=1
00:07:54.642  		--rc genhtml_function_coverage=1
00:07:54.642  		--rc genhtml_legend=1
00:07:54.642  		--rc geninfo_all_blocks=1
00:07:54.642  		--rc geninfo_unexecuted_blocks=1
00:07:54.642  		
00:07:54.642  		'
00:07:54.642    23:41:25	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:54.642  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:54.642  		--rc genhtml_branch_coverage=1
00:07:54.642  		--rc genhtml_function_coverage=1
00:07:54.642  		--rc genhtml_legend=1
00:07:54.642  		--rc geninfo_all_blocks=1
00:07:54.642  		--rc geninfo_unexecuted_blocks=1
00:07:54.642  		
00:07:54.642  		'
00:07:54.642    23:41:25	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:54.642  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:54.642  		--rc genhtml_branch_coverage=1
00:07:54.642  		--rc genhtml_function_coverage=1
00:07:54.642  		--rc genhtml_legend=1
00:07:54.642  		--rc geninfo_all_blocks=1
00:07:54.642  		--rc geninfo_unexecuted_blocks=1
00:07:54.642  		
00:07:54.642  		'
00:07:54.642   23:41:25	-- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:07:54.642   23:41:25	-- scheduler/scheduler.sh@35 -- # scheduler_pid=104269
00:07:54.642   23:41:25	-- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:07:54.642   23:41:25	-- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:07:54.642   23:41:25	-- scheduler/scheduler.sh@37 -- # waitforlisten 104269
00:07:54.642   23:41:25	-- common/autotest_common.sh@829 -- # '[' -z 104269 ']'
00:07:54.642   23:41:25	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:54.642   23:41:25	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:54.642  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:54.642   23:41:25	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:54.642   23:41:25	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:54.642   23:41:25	-- common/autotest_common.sh@10 -- # set +x
00:07:54.642  [2024-12-13 23:41:25.239215] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:54.642  [2024-12-13 23:41:25.239539] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104269 ]
00:07:54.900  [2024-12-13 23:41:25.451359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:55.158  [2024-12-13 23:41:25.784190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:55.158  [2024-12-13 23:41:25.784278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:55.158  [2024-12-13 23:41:25.784434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:55.158  [2024-12-13 23:41:25.784439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:55.725   23:41:26	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:55.725   23:41:26	-- common/autotest_common.sh@862 -- # return 0
00:07:55.725   23:41:26	-- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:07:55.725   23:41:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.725   23:41:26	-- common/autotest_common.sh@10 -- # set +x
00:07:55.725  POWER: Env isn't set yet!
00:07:55.725  POWER: Attempting to initialise ACPI cpufreq power management...
00:07:55.725  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:55.725  POWER: Cannot set governor of lcore 0 to userspace
00:07:55.725  POWER: Attempting to initialise PSTAT power management...
00:07:55.725  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:55.725  POWER: Cannot set governor of lcore 0 to performance
00:07:55.725  POWER: Attempting to initialise AMD PSTATE power management...
00:07:55.725  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:55.725  POWER: Cannot set governor of lcore 0 to userspace
00:07:55.725  POWER: Attempting to initialise CPPC power management...
00:07:55.725  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:55.725  POWER: Cannot set governor of lcore 0 to userspace
00:07:55.725  POWER: Attempting to initialise VM power management...
00:07:55.725  GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:07:55.725  POWER: Unable to set Power Management Environment for lcore 0
00:07:55.725  [2024-12-13 23:41:26.251289] dpdk_governor.c:  88:_init_core: *ERROR*: Failed to initialize on core0
00:07:55.725  [2024-12-13 23:41:26.251358] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0
00:07:55.725  [2024-12-13 23:41:26.251380] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor
00:07:55.725  [2024-12-13 23:41:26.251446] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:07:55.725  [2024-12-13 23:41:26.251478] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:07:55.725  [2024-12-13 23:41:26.251513] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:07:55.725   23:41:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.725   23:41:26	-- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:07:55.725   23:41:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.725   23:41:26	-- common/autotest_common.sh@10 -- # set +x
00:07:55.983  [2024-12-13 23:41:26.618886] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:07:55.983   23:41:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.983   23:41:26	-- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:07:55.983   23:41:26	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:55.983   23:41:26	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:55.983   23:41:26	-- common/autotest_common.sh@10 -- # set +x
00:07:55.983  ************************************
00:07:55.983  START TEST scheduler_create_thread
00:07:55.983  ************************************
00:07:55.983   23:41:26	-- common/autotest_common.sh@1114 -- # scheduler_create_thread
00:07:55.984   23:41:26	-- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:07:55.984   23:41:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.984   23:41:26	-- common/autotest_common.sh@10 -- # set +x
00:07:55.984  2
00:07:55.984   23:41:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.984   23:41:26	-- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:07:55.984   23:41:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.984   23:41:26	-- common/autotest_common.sh@10 -- # set +x
00:07:55.984  3
00:07:55.984   23:41:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.984   23:41:26	-- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:07:55.984   23:41:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.984   23:41:26	-- common/autotest_common.sh@10 -- # set +x
00:07:55.984  4
00:07:55.984   23:41:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.984   23:41:26	-- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:07:55.984   23:41:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.984   23:41:26	-- common/autotest_common.sh@10 -- # set +x
00:07:55.984  5
00:07:55.984   23:41:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.984   23:41:26	-- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:07:55.984   23:41:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.984   23:41:26	-- common/autotest_common.sh@10 -- # set +x
00:07:55.984  6
00:07:55.984   23:41:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.984   23:41:26	-- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:07:55.984   23:41:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.984   23:41:26	-- common/autotest_common.sh@10 -- # set +x
00:07:55.984  7
00:07:55.984   23:41:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.984   23:41:26	-- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:07:55.984   23:41:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.984   23:41:26	-- common/autotest_common.sh@10 -- # set +x
00:07:55.984  8
00:07:55.984   23:41:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.984   23:41:26	-- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:07:55.984   23:41:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.984   23:41:26	-- common/autotest_common.sh@10 -- # set +x
00:07:55.984  9
00:07:55.984   23:41:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.984   23:41:26	-- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:07:55.984   23:41:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.984   23:41:26	-- common/autotest_common.sh@10 -- # set +x
00:07:55.984  10
00:07:55.984   23:41:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.241    23:41:26	-- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:07:56.241    23:41:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.241    23:41:26	-- common/autotest_common.sh@10 -- # set +x
00:07:56.241    23:41:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.241   23:41:26	-- scheduler/scheduler.sh@22 -- # thread_id=11
00:07:56.241   23:41:26	-- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:07:56.242   23:41:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.242   23:41:26	-- common/autotest_common.sh@10 -- # set +x
00:07:56.242   23:41:26	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.242    23:41:26	-- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:07:56.242    23:41:26	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.242    23:41:26	-- common/autotest_common.sh@10 -- # set +x
00:07:57.614    23:41:28	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:57.614   23:41:28	-- scheduler/scheduler.sh@25 -- # thread_id=12
00:07:57.614   23:41:28	-- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:07:57.614   23:41:28	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:57.614   23:41:28	-- common/autotest_common.sh@10 -- # set +x
00:07:58.549   23:41:29	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:58.549  
00:07:58.549  real	0m2.638s
00:07:58.549  user	0m0.012s
00:07:58.549  sys	0m0.009s
00:07:58.549  ************************************
00:07:58.549  END TEST scheduler_create_thread
00:07:58.549  ************************************
00:07:58.549   23:41:29	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:58.549   23:41:29	-- common/autotest_common.sh@10 -- # set +x
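
Stripped of the xtrace plumbing, scheduler_create_thread is a fixed RPC sequence against the scheduler app: eight pinned threads (one busy and one idle per core), an unpinned one-third-active thread, a half_active thread that is raised to 50%, and a throwaway thread that is deleted. A condensed replay (a sketch: it assumes rpc.py's default socket and that scheduler_plugin is importable, which the harness arranges):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
    for mask in 0x1 0x2 0x4 0x8; do
        $rpc scheduler_thread_create -n active_pinned -m $mask -a 100   # @12-@15
        $rpc scheduler_thread_create -n idle_pinned   -m $mask -a 0     # @16-@19
    done
    $rpc scheduler_thread_create -n one_third_active -a 30              # @21
    tid=$($rpc scheduler_thread_create -n half_active -a 0)             # @22: id 11 above
    $rpc scheduler_thread_set_active "$tid" 50                          # @23
    tid=$($rpc scheduler_thread_create -n deleted -a 100)               # @25: id 12 above
    $rpc scheduler_thread_delete "$tid"                                 # @26
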
00:07:58.807   23:41:29	-- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:07:58.807   23:41:29	-- scheduler/scheduler.sh@46 -- # killprocess 104269
00:07:58.807   23:41:29	-- common/autotest_common.sh@936 -- # '[' -z 104269 ']'
00:07:58.807   23:41:29	-- common/autotest_common.sh@940 -- # kill -0 104269
00:07:58.807    23:41:29	-- common/autotest_common.sh@941 -- # uname
00:07:58.807   23:41:29	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:58.807    23:41:29	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104269
00:07:58.807   23:41:29	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:07:58.807  killing process with pid 104269
00:07:58.807   23:41:29	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:07:58.807   23:41:29	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 104269'
00:07:58.807   23:41:29	-- common/autotest_common.sh@955 -- # kill 104269
00:07:58.807   23:41:29	-- common/autotest_common.sh@960 -- # wait 104269
00:07:59.065  [2024-12-13 23:41:29.752685] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:08:00.437  
00:08:00.437  real	0m5.836s
00:08:00.437  user	0m9.544s
00:08:00.437  sys	0m0.535s
00:08:00.437   23:41:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:00.437   23:41:30	-- common/autotest_common.sh@10 -- # set +x
00:08:00.437  ************************************
00:08:00.437  END TEST event_scheduler
00:08:00.437  ************************************
00:08:00.437   23:41:30	-- event/event.sh@51 -- # modprobe -n nbd
00:08:00.437   23:41:30	-- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:08:00.437   23:41:30	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:00.437   23:41:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:00.437   23:41:30	-- common/autotest_common.sh@10 -- # set +x
00:08:00.437  ************************************
00:08:00.437  START TEST app_repeat
00:08:00.437  ************************************
00:08:00.437   23:41:30	-- common/autotest_common.sh@1114 -- # app_repeat_test
00:08:00.437   23:41:30	-- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:00.437   23:41:30	-- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:00.437   23:41:30	-- event/event.sh@13 -- # local nbd_list
00:08:00.437   23:41:30	-- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:00.437   23:41:30	-- event/event.sh@14 -- # local bdev_list
00:08:00.437   23:41:30	-- event/event.sh@15 -- # local repeat_times=4
00:08:00.437   23:41:30	-- event/event.sh@17 -- # modprobe nbd
00:08:00.437   23:41:30	-- event/event.sh@19 -- # repeat_pid=104394
00:08:00.437   23:41:30	-- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:08:00.437   23:41:30	-- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:08:00.437  Process app_repeat pid: 104394
00:08:00.437   23:41:30	-- event/event.sh@21 -- # echo 'Process app_repeat pid: 104394'
00:08:00.437   23:41:30	-- event/event.sh@23 -- # for i in {0..2}
00:08:00.437  spdk_app_start Round 0
00:08:00.437   23:41:30	-- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:08:00.437   23:41:30	-- event/event.sh@25 -- # waitforlisten 104394 /var/tmp/spdk-nbd.sock
00:08:00.437   23:41:30	-- common/autotest_common.sh@829 -- # '[' -z 104394 ']'
00:08:00.437   23:41:30	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:00.437   23:41:30	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:00.437  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:00.437   23:41:30	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:00.437   23:41:30	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:00.437   23:41:30	-- common/autotest_common.sh@10 -- # set +x
00:08:00.437  [2024-12-13 23:41:30.923541] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:00.437  [2024-12-13 23:41:30.923745] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104394 ]
00:08:00.437  [2024-12-13 23:41:31.104827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:00.696  [2024-12-13 23:41:31.293873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:00.696  [2024-12-13 23:41:31.293871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:01.263   23:41:31	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:01.263   23:41:31	-- common/autotest_common.sh@862 -- # return 0
00:08:01.263   23:41:31	-- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:01.522  Malloc0
00:08:01.522   23:41:32	-- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:01.780  Malloc1
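
Each app_repeat round provisions two RAM-backed bdevs and (just below) exports them to the kernel as NBD block devices; the commands are taken verbatim from the trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096        # 64 MiB bdev, 4096-byte blocks -> "Malloc0"
    $RPC bdev_malloc_create 64 4096        # -> "Malloc1"
    $RPC nbd_start_disk Malloc0 /dev/nbd0  # kernel device backed by the bdev
    $RPC nbd_start_disk Malloc1 /dev/nbd1
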
00:08:01.780   23:41:32	-- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:01.780   23:41:32	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:01.780   23:41:32	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:01.780   23:41:32	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:01.780   23:41:32	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:01.780   23:41:32	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:01.780   23:41:32	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:01.780   23:41:32	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:01.780   23:41:32	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:01.780   23:41:32	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:01.780   23:41:32	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:01.780   23:41:32	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:01.780   23:41:32	-- bdev/nbd_common.sh@12 -- # local i
00:08:01.780   23:41:32	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:01.780   23:41:32	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:01.780   23:41:32	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:02.038  /dev/nbd0
00:08:02.038    23:41:32	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:02.038   23:41:32	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:02.038   23:41:32	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:08:02.038   23:41:32	-- common/autotest_common.sh@867 -- # local i
00:08:02.038   23:41:32	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:02.038   23:41:32	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:02.038   23:41:32	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:08:02.038   23:41:32	-- common/autotest_common.sh@871 -- # break
00:08:02.038   23:41:32	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:02.038   23:41:32	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:02.038   23:41:32	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:02.038  1+0 records in
00:08:02.038  1+0 records out
00:08:02.038  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029385 s, 13.9 MB/s
00:08:02.038    23:41:32	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:02.039   23:41:32	-- common/autotest_common.sh@884 -- # size=4096
00:08:02.039   23:41:32	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:02.039   23:41:32	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:02.039   23:41:32	-- common/autotest_common.sh@887 -- # return 0
00:08:02.039   23:41:32	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:02.039   23:41:32	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:02.039   23:41:32	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:08:02.297  /dev/nbd1
00:08:02.297    23:41:33	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:02.297   23:41:33	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:02.297   23:41:33	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:08:02.297   23:41:33	-- common/autotest_common.sh@867 -- # local i
00:08:02.297   23:41:33	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:02.297   23:41:33	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:02.297   23:41:33	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:08:02.297   23:41:33	-- common/autotest_common.sh@871 -- # break
00:08:02.297   23:41:33	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:02.297   23:41:33	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:02.297   23:41:33	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:02.297  1+0 records in
00:08:02.297  1+0 records out
00:08:02.297  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002993 s, 13.7 MB/s
00:08:02.297    23:41:33	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:02.555   23:41:33	-- common/autotest_common.sh@884 -- # size=4096
00:08:02.556   23:41:33	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:02.556   23:41:33	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:02.556   23:41:33	-- common/autotest_common.sh@887 -- # return 0
00:08:02.556   23:41:33	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:02.556   23:41:33	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
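
After each nbd_start_disk, waitfornbd (the @866-@887 trace above) polls until the kernel device is usable: first it waits for the name to appear in /proc/partitions, then it proves the device returns data with a single direct-I/O read. A sketch; the retry delay and temp-file path are illustrative:

    waitfornbd() {
        local nbd_name=$1 i                                   # @866-@867
        for (( i = 1; i <= 20; i++ )); do                     # @869
            grep -q -w "$nbd_name" /proc/partitions && break  # @870-@871
            sleep 0.1
        done
        for (( i = 1; i <= 20; i++ )); do                     # @882
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # @883
            local size=$(stat -c %s /tmp/nbdtest)             # @884
            rm -f /tmp/nbdtest                                # @885
            [ "$size" != 0 ] && return 0                      # @886-@887: real data read
            sleep 0.1
        done
        return 1
    }
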
00:08:02.556    23:41:33	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:02.556    23:41:33	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:02.556     23:41:33	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:02.815    23:41:33	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:02.815    {
00:08:02.815      "nbd_device": "/dev/nbd0",
00:08:02.815      "bdev_name": "Malloc0"
00:08:02.815    },
00:08:02.815    {
00:08:02.815      "nbd_device": "/dev/nbd1",
00:08:02.815      "bdev_name": "Malloc1"
00:08:02.815    }
00:08:02.815  ]'
00:08:02.815     23:41:33	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:02.815     23:41:33	-- bdev/nbd_common.sh@64 -- # echo '[
00:08:02.815    {
00:08:02.815      "nbd_device": "/dev/nbd0",
00:08:02.815      "bdev_name": "Malloc0"
00:08:02.815    },
00:08:02.815    {
00:08:02.815      "nbd_device": "/dev/nbd1",
00:08:02.815      "bdev_name": "Malloc1"
00:08:02.815    }
00:08:02.815  ]'
00:08:02.815    23:41:33	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:02.815  /dev/nbd1'
00:08:02.815     23:41:33	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:02.815  /dev/nbd1'
00:08:02.815     23:41:33	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:02.815    23:41:33	-- bdev/nbd_common.sh@65 -- # count=2
00:08:02.815    23:41:33	-- bdev/nbd_common.sh@66 -- # echo 2
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@95 -- # count=2
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@71 -- # local operation=write
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:08:02.815  256+0 records in
00:08:02.815  256+0 records out
00:08:02.815  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0080523 s, 130 MB/s
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:02.815  256+0 records in
00:08:02.815  256+0 records out
00:08:02.815  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227912 s, 46.0 MB/s
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:02.815  256+0 records in
00:08:02.815  256+0 records out
00:08:02.815  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265278 s, 39.5 MB/s
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
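
The dd/cmp pairs above are nbd_dd_data_verify (the @70-@85 trace): push a 1 MiB random pattern through every NBD device with direct I/O, then read each device back and byte-compare it against the source file. A sketch reconstructed from the xtrace:

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2                         # @70-@71
        local tmp_file=/tmp/nbdrandtest                          # real path: test/event/nbdrandtest
        if [ "$operation" = write ]; then                        # @74
            dd if=/dev/urandom of=$tmp_file bs=4096 count=256            # @76: 1 MiB pattern
            for i in "${nbd_list[@]}"; do                        # @77
                dd if=$tmp_file of=$i bs=4096 count=256 oflag=direct     # @78
            done
        elif [ "$operation" = verify ]; then                     # @80
            for i in "${nbd_list[@]}"; do                        # @82
                cmp -b -n 1M $tmp_file $i                        # @83: any mismatch fails the test
            done
            rm $tmp_file                                         # @85
        fi
    }
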
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@51 -- # local i
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:02.815   23:41:33	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:03.074    23:41:33	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:03.074   23:41:33	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:03.074   23:41:33	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:03.074   23:41:33	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:03.074   23:41:33	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:03.074   23:41:33	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:03.074   23:41:33	-- bdev/nbd_common.sh@41 -- # break
00:08:03.074   23:41:33	-- bdev/nbd_common.sh@45 -- # return 0
00:08:03.074   23:41:33	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:03.074   23:41:33	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:03.333    23:41:33	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:03.333   23:41:33	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:03.333   23:41:33	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:03.333   23:41:33	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:03.333   23:41:33	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:03.333   23:41:33	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:03.333   23:41:33	-- bdev/nbd_common.sh@41 -- # break
00:08:03.333   23:41:33	-- bdev/nbd_common.sh@45 -- # return 0
00:08:03.333    23:41:33	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:03.333    23:41:33	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:03.333     23:41:33	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:03.592    23:41:34	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:03.592     23:41:34	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:03.592     23:41:34	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:03.592    23:41:34	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:03.592     23:41:34	-- bdev/nbd_common.sh@65 -- # echo ''
00:08:03.592     23:41:34	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:03.592     23:41:34	-- bdev/nbd_common.sh@65 -- # true
00:08:03.592    23:41:34	-- bdev/nbd_common.sh@65 -- # count=0
00:08:03.592    23:41:34	-- bdev/nbd_common.sh@66 -- # echo 0
00:08:03.592   23:41:34	-- bdev/nbd_common.sh@104 -- # count=0
00:08:03.592   23:41:34	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:03.592   23:41:34	-- bdev/nbd_common.sh@109 -- # return 0
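
nbd_get_count (the @61-@66 trace) confirms the teardown: it lists the exported devices over RPC, extracts their paths with jq, and counts them; the "|| true" keeps grep's no-match exit status from tripping set -e when, as here, zero devices remain. A sketch:

    nbd_get_count() {
        local rpc_server=$1
        local disks_json disks_name count
        disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)   # @63
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')  # @64
        count=$(echo "$disks_name" | grep -c /dev/nbd || true)        # @65
        echo "$count"                                                 # @66
    }
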
00:08:03.592   23:41:34	-- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:08:04.159   23:41:34	-- event/event.sh@35 -- # sleep 3
00:08:05.094  [2024-12-13 23:41:35.718008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:05.353  [2024-12-13 23:41:35.875764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:05.353  [2024-12-13 23:41:35.875767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:05.353  [2024-12-13 23:41:36.048698] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:08:05.353  [2024-12-13 23:41:36.048869] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:08:07.284   23:41:37	-- event/event.sh@23 -- # for i in {0..2}
00:08:07.284  spdk_app_start Round 1
00:08:07.284   23:41:37	-- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:08:07.284   23:41:37	-- event/event.sh@25 -- # waitforlisten 104394 /var/tmp/spdk-nbd.sock
00:08:07.284   23:41:37	-- common/autotest_common.sh@829 -- # '[' -z 104394 ']'
00:08:07.284   23:41:37	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:07.284   23:41:37	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:07.284  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:07.284   23:41:37	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:07.284   23:41:37	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:07.284   23:41:37	-- common/autotest_common.sh@10 -- # set +x
00:08:07.284   23:41:37	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:07.284   23:41:37	-- common/autotest_common.sh@862 -- # return 0
00:08:07.284   23:41:37	-- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:07.543  Malloc0
00:08:07.543   23:41:38	-- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:07.802  Malloc1
00:08:07.802   23:41:38	-- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:07.802   23:41:38	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:07.802   23:41:38	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:07.802   23:41:38	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:07.802   23:41:38	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:07.802   23:41:38	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:07.802   23:41:38	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:07.802   23:41:38	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:07.802   23:41:38	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:07.802   23:41:38	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:07.802   23:41:38	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:07.802   23:41:38	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:07.802   23:41:38	-- bdev/nbd_common.sh@12 -- # local i
00:08:07.802   23:41:38	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:07.802   23:41:38	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:07.802   23:41:38	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:08.060  /dev/nbd0
00:08:08.060    23:41:38	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:08.060   23:41:38	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:08.060   23:41:38	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:08:08.060   23:41:38	-- common/autotest_common.sh@867 -- # local i
00:08:08.060   23:41:38	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:08.060   23:41:38	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:08.061   23:41:38	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:08:08.061   23:41:38	-- common/autotest_common.sh@871 -- # break
00:08:08.061   23:41:38	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:08.061   23:41:38	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:08.061   23:41:38	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:08.061  1+0 records in
00:08:08.061  1+0 records out
00:08:08.061  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317363 s, 12.9 MB/s
00:08:08.061    23:41:38	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:08.061   23:41:38	-- common/autotest_common.sh@884 -- # size=4096
00:08:08.061   23:41:38	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:08.061   23:41:38	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:08.061   23:41:38	-- common/autotest_common.sh@887 -- # return 0
00:08:08.061   23:41:38	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:08.061   23:41:38	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:08.061   23:41:38	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:08:08.319  /dev/nbd1
00:08:08.319    23:41:39	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:08.319   23:41:39	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:08.319   23:41:39	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:08:08.319   23:41:39	-- common/autotest_common.sh@867 -- # local i
00:08:08.319   23:41:39	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:08.319   23:41:39	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:08.319   23:41:39	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:08:08.319   23:41:39	-- common/autotest_common.sh@871 -- # break
00:08:08.319   23:41:39	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:08.319   23:41:39	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:08.319   23:41:39	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:08.319  1+0 records in
00:08:08.319  1+0 records out
00:08:08.319  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327645 s, 12.5 MB/s
00:08:08.319    23:41:39	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:08.319   23:41:39	-- common/autotest_common.sh@884 -- # size=4096
00:08:08.319   23:41:39	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:08.319   23:41:39	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:08.319   23:41:39	-- common/autotest_common.sh@887 -- # return 0
00:08:08.319   23:41:39	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:08.319   23:41:39	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:08.319    23:41:39	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:08.319    23:41:39	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:08.319     23:41:39	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:08.577    23:41:39	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:08.577    {
00:08:08.577      "nbd_device": "/dev/nbd0",
00:08:08.577      "bdev_name": "Malloc0"
00:08:08.577    },
00:08:08.577    {
00:08:08.577      "nbd_device": "/dev/nbd1",
00:08:08.577      "bdev_name": "Malloc1"
00:08:08.577    }
00:08:08.577  ]'
00:08:08.577     23:41:39	-- bdev/nbd_common.sh@64 -- # echo '[
00:08:08.577    {
00:08:08.577      "nbd_device": "/dev/nbd0",
00:08:08.577      "bdev_name": "Malloc0"
00:08:08.577    },
00:08:08.577    {
00:08:08.577      "nbd_device": "/dev/nbd1",
00:08:08.577      "bdev_name": "Malloc1"
00:08:08.577    }
00:08:08.577  ]'
00:08:08.578     23:41:39	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:08.836    23:41:39	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:08.836  /dev/nbd1'
00:08:08.836     23:41:39	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:08.836  /dev/nbd1'
00:08:08.836     23:41:39	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:08.836    23:41:39	-- bdev/nbd_common.sh@65 -- # count=2
00:08:08.836    23:41:39	-- bdev/nbd_common.sh@66 -- # echo 2
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@95 -- # count=2
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@71 -- # local operation=write
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:08:08.836  256+0 records in
00:08:08.836  256+0 records out
00:08:08.836  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011005 s, 95.3 MB/s
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:08.836  256+0 records in
00:08:08.836  256+0 records out
00:08:08.836  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225026 s, 46.6 MB/s
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:08.836  256+0 records in
00:08:08.836  256+0 records out
00:08:08.836  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0324728 s, 32.3 MB/s
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
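Written out straight, the nbd_dd_data_verify pass that just completed is: fill a 1 MiB file from /dev/urandom, copy it onto every exported nbd device with O_DIRECT, then byte-compare each device against the source. A minimal sketch of the traced commands (the scratch path is illustrative):

tmp_file=/tmp/nbdrandtest                                    # path assumed
nbd_list=(/dev/nbd0 /dev/nbd1)
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # raw write, bypassing the page cache
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                          # non-zero exit on the first differing byte
done
rm "$tmp_file"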
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@51 -- # local i
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:08.836   23:41:39	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:09.094    23:41:39	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:09.094   23:41:39	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:09.094   23:41:39	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:09.094   23:41:39	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:09.094   23:41:39	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:09.094   23:41:39	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:09.094   23:41:39	-- bdev/nbd_common.sh@41 -- # break
00:08:09.094   23:41:39	-- bdev/nbd_common.sh@45 -- # return 0
00:08:09.094   23:41:39	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:09.095   23:41:39	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:09.352    23:41:39	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:09.352   23:41:39	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:09.352   23:41:39	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:09.352   23:41:39	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:09.352   23:41:39	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:09.352   23:41:39	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:09.352   23:41:39	-- bdev/nbd_common.sh@41 -- # break
00:08:09.352   23:41:39	-- bdev/nbd_common.sh@45 -- # return 0
00:08:09.352    23:41:39	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:09.352    23:41:39	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:09.352     23:41:39	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:09.611    23:41:40	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:09.611     23:41:40	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:09.611     23:41:40	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:09.611    23:41:40	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:09.611     23:41:40	-- bdev/nbd_common.sh@65 -- # echo ''
00:08:09.611     23:41:40	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:09.611     23:41:40	-- bdev/nbd_common.sh@65 -- # true
00:08:09.611    23:41:40	-- bdev/nbd_common.sh@65 -- # count=0
00:08:09.611    23:41:40	-- bdev/nbd_common.sh@66 -- # echo 0
00:08:09.611   23:41:40	-- bdev/nbd_common.sh@104 -- # count=0
00:08:09.611   23:41:40	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:09.611   23:41:40	-- bdev/nbd_common.sh@109 -- # return 0
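The zero-count check above comes from nbd_get_count: list the exported disks over the RPC socket, extract the device names with jq, and count the /dev/nbd matches. grep -c exits 1 when nothing matches, which is why the trace falls through a bare `true` in the empty case. The same pipeline, repo-relative paths for brevity:

rpc=/var/tmp/spdk-nbd.sock
json=$(scripts/rpc.py -s "$rpc" nbd_get_disks)
names=$(echo "$json" | jq -r '.[] | .nbd_device')
count=$(echo "$names" | grep -c /dev/nbd || true)   # 0 matches would otherwise abort under set -e
echo "$count"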
00:08:09.611   23:41:40	-- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:08:10.177   23:41:40	-- event/event.sh@35 -- # sleep 3
00:08:11.114  [2024-12-13 23:41:41.727660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:11.373  [2024-12-13 23:41:41.885852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:11.373  [2024-12-13 23:41:41.885861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:11.373  [2024-12-13 23:41:42.058318] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:08:11.373  [2024-12-13 23:41:42.058442] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:08:13.277   23:41:43	-- event/event.sh@23 -- # for i in {0..2}
00:08:13.277  spdk_app_start Round 2
00:08:13.277   23:41:43	-- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:08:13.277   23:41:43	-- event/event.sh@25 -- # waitforlisten 104394 /var/tmp/spdk-nbd.sock
00:08:13.277   23:41:43	-- common/autotest_common.sh@829 -- # '[' -z 104394 ']'
00:08:13.277   23:41:43	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:13.277   23:41:43	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:13.277  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:13.277   23:41:43	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:13.277   23:41:43	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:13.277   23:41:43	-- common/autotest_common.sh@10 -- # set +x
00:08:13.277   23:41:43	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:13.277   23:41:43	-- common/autotest_common.sh@862 -- # return 0
00:08:13.277   23:41:43	-- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:13.536  Malloc0
00:08:13.536   23:41:44	-- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:13.794  Malloc1
00:08:13.794   23:41:44	-- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:13.794   23:41:44	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@12 -- # local i
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:14.053  /dev/nbd0
00:08:14.053    23:41:44	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:14.053   23:41:44	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:08:14.053   23:41:44	-- common/autotest_common.sh@867 -- # local i
00:08:14.053   23:41:44	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:14.053   23:41:44	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:14.053   23:41:44	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:08:14.053   23:41:44	-- common/autotest_common.sh@871 -- # break
00:08:14.053   23:41:44	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:14.053   23:41:44	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:14.053   23:41:44	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:14.053  1+0 records in
00:08:14.053  1+0 records out
00:08:14.053  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051017 s, 8.0 MB/s
00:08:14.053    23:41:44	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:14.053   23:41:44	-- common/autotest_common.sh@884 -- # size=4096
00:08:14.053   23:41:44	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:14.053   23:41:44	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:14.053   23:41:44	-- common/autotest_common.sh@887 -- # return 0
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:14.053   23:41:44	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:08:14.312  /dev/nbd1
00:08:14.576    23:41:45	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:14.576   23:41:45	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:14.576   23:41:45	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:08:14.576   23:41:45	-- common/autotest_common.sh@867 -- # local i
00:08:14.576   23:41:45	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:14.576   23:41:45	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:14.576   23:41:45	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:08:14.576   23:41:45	-- common/autotest_common.sh@871 -- # break
00:08:14.576   23:41:45	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:14.576   23:41:45	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:14.576   23:41:45	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:14.576  1+0 records in
00:08:14.576  1+0 records out
00:08:14.576  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267094 s, 15.3 MB/s
00:08:14.576    23:41:45	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:14.576   23:41:45	-- common/autotest_common.sh@884 -- # size=4096
00:08:14.576   23:41:45	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:14.576   23:41:45	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:14.576   23:41:45	-- common/autotest_common.sh@887 -- # return 0
00:08:14.576   23:41:45	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:14.576   23:41:45	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:14.576    23:41:45	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:14.576    23:41:45	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:14.576     23:41:45	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:14.836    23:41:45	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:14.836    {
00:08:14.836      "nbd_device": "/dev/nbd0",
00:08:14.836      "bdev_name": "Malloc0"
00:08:14.836    },
00:08:14.836    {
00:08:14.836      "nbd_device": "/dev/nbd1",
00:08:14.836      "bdev_name": "Malloc1"
00:08:14.836    }
00:08:14.836  ]'
00:08:14.836     23:41:45	-- bdev/nbd_common.sh@64 -- # echo '[
00:08:14.836    {
00:08:14.836      "nbd_device": "/dev/nbd0",
00:08:14.836      "bdev_name": "Malloc0"
00:08:14.836    },
00:08:14.836    {
00:08:14.836      "nbd_device": "/dev/nbd1",
00:08:14.836      "bdev_name": "Malloc1"
00:08:14.836    }
00:08:14.836  ]'
00:08:14.836     23:41:45	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:14.836    23:41:45	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:14.836  /dev/nbd1'
00:08:14.836     23:41:45	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:14.836  /dev/nbd1'
00:08:14.836     23:41:45	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:14.836    23:41:45	-- bdev/nbd_common.sh@65 -- # count=2
00:08:14.836    23:41:45	-- bdev/nbd_common.sh@66 -- # echo 2
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@95 -- # count=2
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@71 -- # local operation=write
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:08:14.836  256+0 records in
00:08:14.836  256+0 records out
00:08:14.836  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00960744 s, 109 MB/s
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:14.836  256+0 records in
00:08:14.836  256+0 records out
00:08:14.836  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275601 s, 38.0 MB/s
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:14.836  256+0 records in
00:08:14.836  256+0 records out
00:08:14.836  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268727 s, 39.0 MB/s
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@51 -- # local i
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:14.836   23:41:45	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:15.095    23:41:45	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:15.095   23:41:45	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:15.095   23:41:45	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:15.095   23:41:45	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:15.095   23:41:45	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:15.095   23:41:45	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:15.095   23:41:45	-- bdev/nbd_common.sh@41 -- # break
00:08:15.095   23:41:45	-- bdev/nbd_common.sh@45 -- # return 0
00:08:15.095   23:41:45	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:15.095   23:41:45	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:15.353    23:41:45	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:15.353   23:41:45	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:15.353   23:41:45	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:15.353   23:41:45	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:15.353   23:41:45	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:15.353   23:41:45	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:15.353   23:41:45	-- bdev/nbd_common.sh@41 -- # break
00:08:15.353   23:41:45	-- bdev/nbd_common.sh@45 -- # return 0
00:08:15.353    23:41:45	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:15.353    23:41:45	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:15.353     23:41:45	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:15.612    23:41:46	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:15.612     23:41:46	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:15.612     23:41:46	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:15.612    23:41:46	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:15.612     23:41:46	-- bdev/nbd_common.sh@65 -- # echo ''
00:08:15.612     23:41:46	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:15.612     23:41:46	-- bdev/nbd_common.sh@65 -- # true
00:08:15.612    23:41:46	-- bdev/nbd_common.sh@65 -- # count=0
00:08:15.612    23:41:46	-- bdev/nbd_common.sh@66 -- # echo 0
00:08:15.612   23:41:46	-- bdev/nbd_common.sh@104 -- # count=0
00:08:15.612   23:41:46	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:15.612   23:41:46	-- bdev/nbd_common.sh@109 -- # return 0
00:08:15.612   23:41:46	-- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:08:16.179   23:41:46	-- event/event.sh@35 -- # sleep 3
00:08:17.114  [2024-12-13 23:41:47.701832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:17.373  [2024-12-13 23:41:47.859732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:17.373  [2024-12-13 23:41:47.859740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.373  [2024-12-13 23:41:48.038323] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:08:17.373  [2024-12-13 23:41:48.038439] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:08:19.275   23:41:49	-- event/event.sh@38 -- # waitforlisten 104394 /var/tmp/spdk-nbd.sock
00:08:19.275   23:41:49	-- common/autotest_common.sh@829 -- # '[' -z 104394 ']'
00:08:19.275   23:41:49	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:19.275   23:41:49	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:19.275  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:19.275   23:41:49	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:19.275   23:41:49	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:19.275   23:41:49	-- common/autotest_common.sh@10 -- # set +x
00:08:19.275   23:41:49	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:19.275   23:41:49	-- common/autotest_common.sh@862 -- # return 0
00:08:19.275   23:41:49	-- event/event.sh@39 -- # killprocess 104394
00:08:19.275   23:41:49	-- common/autotest_common.sh@936 -- # '[' -z 104394 ']'
00:08:19.275   23:41:49	-- common/autotest_common.sh@940 -- # kill -0 104394
00:08:19.275    23:41:49	-- common/autotest_common.sh@941 -- # uname
00:08:19.275   23:41:49	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:19.275    23:41:49	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104394
00:08:19.275   23:41:49	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:19.275   23:41:49	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:19.275  killing process with pid 104394
00:08:19.275   23:41:49	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 104394'
00:08:19.275   23:41:49	-- common/autotest_common.sh@955 -- # kill 104394
00:08:19.275   23:41:49	-- common/autotest_common.sh@960 -- # wait 104394
00:08:20.649  spdk_app_start is called in Round 0.
00:08:20.649  Shutdown signal received, stop current app iteration
00:08:20.649  Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization...
00:08:20.649  spdk_app_start is called in Round 1.
00:08:20.649  Shutdown signal received, stop current app iteration
00:08:20.649  Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization...
00:08:20.649  spdk_app_start is called in Round 2.
00:08:20.649  Shutdown signal received, stop current app iteration
00:08:20.649  Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization...
00:08:20.649  spdk_app_start is called in Round 3.
00:08:20.649  Shutdown signal received, stop current app iteration
00:08:20.649   23:41:51	-- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:08:20.649   23:41:51	-- event/event.sh@42 -- # return 0
00:08:20.649  
00:08:20.649  real	0m20.263s
00:08:20.649  user	0m43.325s
00:08:20.649  sys	0m2.986s
00:08:20.649   23:41:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:20.649   23:41:51	-- common/autotest_common.sh@10 -- # set +x
00:08:20.649  ************************************
00:08:20.649  END TEST app_repeat
00:08:20.649  ************************************
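Reconstructed from the traced event.sh line numbers (23-35), the app_repeat test that just finished is roughly a three-round restart loop on top of the initial start — the app's own counter is why the summary above reports Rounds 0 through 3. Each round waits for the target's socket, creates two 64 MB malloc bdevs with 4 KiB blocks, runs the nbd write/verify pass, then asks the target to kill itself and sleeps before the next iteration. A sketch under those assumptions ($pid is the backgrounded target; helper names are per the trace):

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$pid" /var/tmp/spdk-nbd.sock
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # Malloc0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3
done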
00:08:20.649   23:41:51	-- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:08:20.649   23:41:51	-- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:08:20.649   23:41:51	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:20.649   23:41:51	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:20.649   23:41:51	-- common/autotest_common.sh@10 -- # set +x
00:08:20.649  ************************************
00:08:20.649  START TEST cpu_locks
00:08:20.649  ************************************
00:08:20.650   23:41:51	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:08:20.650  * Looking for test storage...
00:08:20.650  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:08:20.650    23:41:51	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:08:20.650     23:41:51	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:08:20.650     23:41:51	-- common/autotest_common.sh@1690 -- # lcov --version
00:08:20.650    23:41:51	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:08:20.650    23:41:51	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:08:20.650    23:41:51	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:08:20.650    23:41:51	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:08:20.650    23:41:51	-- scripts/common.sh@335 -- # IFS=.-:
00:08:20.650    23:41:51	-- scripts/common.sh@335 -- # read -ra ver1
00:08:20.650    23:41:51	-- scripts/common.sh@336 -- # IFS=.-:
00:08:20.650    23:41:51	-- scripts/common.sh@336 -- # read -ra ver2
00:08:20.650    23:41:51	-- scripts/common.sh@337 -- # local 'op=<'
00:08:20.650    23:41:51	-- scripts/common.sh@339 -- # ver1_l=2
00:08:20.650    23:41:51	-- scripts/common.sh@340 -- # ver2_l=1
00:08:20.650    23:41:51	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:08:20.650    23:41:51	-- scripts/common.sh@343 -- # case "$op" in
00:08:20.650    23:41:51	-- scripts/common.sh@344 -- # : 1
00:08:20.650    23:41:51	-- scripts/common.sh@363 -- # (( v = 0 ))
00:08:20.650    23:41:51	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:20.650     23:41:51	-- scripts/common.sh@364 -- # decimal 1
00:08:20.650     23:41:51	-- scripts/common.sh@352 -- # local d=1
00:08:20.650     23:41:51	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:20.650     23:41:51	-- scripts/common.sh@354 -- # echo 1
00:08:20.650    23:41:51	-- scripts/common.sh@364 -- # ver1[v]=1
00:08:20.650     23:41:51	-- scripts/common.sh@365 -- # decimal 2
00:08:20.650     23:41:51	-- scripts/common.sh@352 -- # local d=2
00:08:20.650     23:41:51	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:20.650     23:41:51	-- scripts/common.sh@354 -- # echo 2
00:08:20.650    23:41:51	-- scripts/common.sh@365 -- # ver2[v]=2
00:08:20.650    23:41:51	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:08:20.650    23:41:51	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:08:20.650    23:41:51	-- scripts/common.sh@367 -- # return 0
00:08:20.650    23:41:51	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:20.650    23:41:51	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:08:20.650  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.650  		--rc genhtml_branch_coverage=1
00:08:20.650  		--rc genhtml_function_coverage=1
00:08:20.650  		--rc genhtml_legend=1
00:08:20.650  		--rc geninfo_all_blocks=1
00:08:20.650  		--rc geninfo_unexecuted_blocks=1
00:08:20.650  		
00:08:20.650  		'
00:08:20.650    23:41:51	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:08:20.650  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.650  		--rc genhtml_branch_coverage=1
00:08:20.650  		--rc genhtml_function_coverage=1
00:08:20.650  		--rc genhtml_legend=1
00:08:20.650  		--rc geninfo_all_blocks=1
00:08:20.650  		--rc geninfo_unexecuted_blocks=1
00:08:20.650  		
00:08:20.650  		'
00:08:20.650    23:41:51	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:08:20.650  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.650  		--rc genhtml_branch_coverage=1
00:08:20.650  		--rc genhtml_function_coverage=1
00:08:20.650  		--rc genhtml_legend=1
00:08:20.650  		--rc geninfo_all_blocks=1
00:08:20.650  		--rc geninfo_unexecuted_blocks=1
00:08:20.650  		
00:08:20.650  		'
00:08:20.650    23:41:51	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:08:20.650  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.650  		--rc genhtml_branch_coverage=1
00:08:20.650  		--rc genhtml_function_coverage=1
00:08:20.650  		--rc genhtml_legend=1
00:08:20.650  		--rc geninfo_all_blocks=1
00:08:20.650  		--rc geninfo_unexecuted_blocks=1
00:08:20.650  		
00:08:20.650  		'
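The scripts/common.sh trace above decides whether the installed lcov (1.15 here) predates version 2, so the harness can pick the legacy --rc coverage options: both version strings are split on the separators . - : and compared field by field, numerically, with missing fields treated as 0. A sketch simplified to the strictly-less-than case exercised here; the real cmp_versions also handles the other operators:

version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal versions are not less-than
}
version_lt 1.15 2 && echo "lcov predates 2.x; use legacy --rc options"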
00:08:20.650   23:41:51	-- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:08:20.650   23:41:51	-- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:08:20.650   23:41:51	-- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:08:20.650   23:41:51	-- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:08:20.650   23:41:51	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:20.650   23:41:51	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:20.650   23:41:51	-- common/autotest_common.sh@10 -- # set +x
00:08:20.650  ************************************
00:08:20.650  START TEST default_locks
00:08:20.650  ************************************
00:08:20.650   23:41:51	-- common/autotest_common.sh@1114 -- # default_locks
00:08:20.650   23:41:51	-- event/cpu_locks.sh@46 -- # spdk_tgt_pid=104942
00:08:20.650   23:41:51	-- event/cpu_locks.sh@47 -- # waitforlisten 104942
00:08:20.650   23:41:51	-- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:08:20.650   23:41:51	-- common/autotest_common.sh@829 -- # '[' -z 104942 ']'
00:08:20.650   23:41:51	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:20.650  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:20.650   23:41:51	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:20.650   23:41:51	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:20.650   23:41:51	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:20.650   23:41:51	-- common/autotest_common.sh@10 -- # set +x
00:08:20.908  [2024-12-13 23:41:51.445489] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:20.908  [2024-12-13 23:41:51.445703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104942 ]
00:08:20.908  [2024-12-13 23:41:51.613401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:21.166  [2024-12-13 23:41:51.817751] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:21.166  [2024-12-13 23:41:51.818031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:22.578   23:41:53	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:22.578   23:41:53	-- common/autotest_common.sh@862 -- # return 0
00:08:22.578   23:41:53	-- event/cpu_locks.sh@49 -- # locks_exist 104942
00:08:22.578   23:41:53	-- event/cpu_locks.sh@22 -- # lslocks -p 104942
00:08:22.578   23:41:53	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:22.836   23:41:53	-- event/cpu_locks.sh@50 -- # killprocess 104942
00:08:22.836   23:41:53	-- common/autotest_common.sh@936 -- # '[' -z 104942 ']'
00:08:22.836   23:41:53	-- common/autotest_common.sh@940 -- # kill -0 104942
00:08:22.836    23:41:53	-- common/autotest_common.sh@941 -- # uname
00:08:22.836   23:41:53	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:22.836    23:41:53	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104942
00:08:22.836   23:41:53	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:22.836   23:41:53	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:22.836  killing process with pid 104942
00:08:22.836   23:41:53	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 104942'
00:08:22.836   23:41:53	-- common/autotest_common.sh@955 -- # kill 104942
00:08:22.836   23:41:53	-- common/autotest_common.sh@960 -- # wait 104942
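The lock probe used throughout these tests (locks_exist, cpu_locks.sh line 22) is just lslocks filtered for SPDK's per-core lock files: a healthy target holds one flock per claimed core, named spdk_cpu_lock_<core> (exact path assumed here), so the probe succeeds while the pid appears in that listing and fails once the process dies:

locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}
locks_exist 104942 && echo "pid 104942 still holds its core lock"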
00:08:24.758   23:41:55	-- event/cpu_locks.sh@52 -- # NOT waitforlisten 104942
00:08:24.758   23:41:55	-- common/autotest_common.sh@650 -- # local es=0
00:08:24.758   23:41:55	-- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 104942
00:08:24.758   23:41:55	-- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:08:24.758   23:41:55	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:24.758    23:41:55	-- common/autotest_common.sh@642 -- # type -t waitforlisten
00:08:24.758   23:41:55	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:24.758   23:41:55	-- common/autotest_common.sh@653 -- # waitforlisten 104942
00:08:24.758   23:41:55	-- common/autotest_common.sh@829 -- # '[' -z 104942 ']'
00:08:24.758   23:41:55	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:24.759   23:41:55	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:24.759  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:24.759   23:41:55	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:24.759   23:41:55	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:24.759   23:41:55	-- common/autotest_common.sh@10 -- # set +x
00:08:24.759  ERROR: process (pid: 104942) is no longer running
00:08:24.759  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (104942) - No such process
00:08:24.759   23:41:55	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:24.759   23:41:55	-- common/autotest_common.sh@862 -- # return 1
00:08:24.759   23:41:55	-- common/autotest_common.sh@653 -- # es=1
00:08:24.759   23:41:55	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:24.759   23:41:55	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:24.759   23:41:55	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:24.759   23:41:55	-- event/cpu_locks.sh@54 -- # no_locks
00:08:24.759   23:41:55	-- event/cpu_locks.sh@26 -- # lock_files=()
00:08:24.759   23:41:55	-- event/cpu_locks.sh@26 -- # local lock_files
00:08:24.759   23:41:55	-- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:08:24.759  ************************************
00:08:24.759  END TEST default_locks
00:08:24.759  ************************************
00:08:24.759  
00:08:24.759  real	0m4.042s
00:08:24.759  user	0m4.147s
00:08:24.759  sys	0m0.735s
00:08:24.759   23:41:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:24.759   23:41:55	-- common/autotest_common.sh@10 -- # set +x
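The NOT wrapper exercised at cpu_locks.sh line 52 above turns an expected failure into a pass: waitforlisten against the killed pid must error out, and NOT succeeds only when it does. A condensed sketch of the traced es bookkeeping — the signal handling is simplified, and the traced helper also checks an expected-output string that is empty here:

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return 1    # death by signal is a crash, not an expected failure
    (( es != 0 ))                 # succeed only if the wrapped command failed
}
NOT waitforlisten 104942 && echo "target is gone, as required"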
00:08:24.759   23:41:55	-- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:08:24.759   23:41:55	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:24.759   23:41:55	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:24.759   23:41:55	-- common/autotest_common.sh@10 -- # set +x
00:08:24.759  ************************************
00:08:24.759  START TEST default_locks_via_rpc
00:08:24.759  ************************************
00:08:24.759   23:41:55	-- common/autotest_common.sh@1114 -- # default_locks_via_rpc
00:08:24.759   23:41:55	-- event/cpu_locks.sh@62 -- # spdk_tgt_pid=105025
00:08:24.759   23:41:55	-- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:08:24.759   23:41:55	-- event/cpu_locks.sh@63 -- # waitforlisten 105025
00:08:24.759   23:41:55	-- common/autotest_common.sh@829 -- # '[' -z 105025 ']'
00:08:24.759   23:41:55	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:24.759   23:41:55	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:24.759  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:24.759   23:41:55	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:24.759   23:41:55	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:24.759   23:41:55	-- common/autotest_common.sh@10 -- # set +x
00:08:25.017  [2024-12-13 23:41:55.535134] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:25.017  [2024-12-13 23:41:55.535335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105025 ]
00:08:25.017  [2024-12-13 23:41:55.705070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:25.276  [2024-12-13 23:41:55.895507] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:25.276  [2024-12-13 23:41:55.895768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:26.651   23:41:57	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:26.651   23:41:57	-- common/autotest_common.sh@862 -- # return 0
00:08:26.651   23:41:57	-- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:08:26.651   23:41:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.651   23:41:57	-- common/autotest_common.sh@10 -- # set +x
00:08:26.651   23:41:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.651   23:41:57	-- event/cpu_locks.sh@67 -- # no_locks
00:08:26.651   23:41:57	-- event/cpu_locks.sh@26 -- # lock_files=()
00:08:26.651   23:41:57	-- event/cpu_locks.sh@26 -- # local lock_files
00:08:26.651   23:41:57	-- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:08:26.651   23:41:57	-- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:08:26.651   23:41:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.651   23:41:57	-- common/autotest_common.sh@10 -- # set +x
00:08:26.651   23:41:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.651   23:41:57	-- event/cpu_locks.sh@71 -- # locks_exist 105025
00:08:26.651   23:41:57	-- event/cpu_locks.sh@22 -- # lslocks -p 105025
00:08:26.651   23:41:57	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:26.910   23:41:57	-- event/cpu_locks.sh@73 -- # killprocess 105025
00:08:26.910   23:41:57	-- common/autotest_common.sh@936 -- # '[' -z 105025 ']'
00:08:26.910   23:41:57	-- common/autotest_common.sh@940 -- # kill -0 105025
00:08:26.910    23:41:57	-- common/autotest_common.sh@941 -- # uname
00:08:26.910   23:41:57	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:26.910    23:41:57	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105025
00:08:26.910   23:41:57	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:26.910  killing process with pid 105025
00:08:26.910   23:41:57	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:26.910   23:41:57	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 105025'
00:08:26.910   23:41:57	-- common/autotest_common.sh@955 -- # kill 105025
00:08:26.910   23:41:57	-- common/autotest_common.sh@960 -- # wait 105025
00:08:29.441  
00:08:29.441  real	0m4.136s
00:08:29.441  user	0m4.244s
00:08:29.441  sys	0m0.717s
00:08:29.441   23:41:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:29.441  ************************************
00:08:29.441  END TEST default_locks_via_rpc
00:08:29.441  ************************************
00:08:29.441   23:41:59	-- common/autotest_common.sh@10 -- # set +x
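default_locks_via_rpc drives the same lock lifecycle over the RPC plane instead of process flags, per the rpc_cmd calls traced at cpu_locks.sh lines 65-71: release the running target's core locks, confirm no lock files remain, then re-take them and confirm lslocks sees spdk_cpu_lock again. The equivalent raw calls, with the socket path from the trace and $pid assumed:

rpc=/var/tmp/spdk.sock
scripts/rpc.py -s "$rpc" framework_disable_cpumask_locks   # drop the core-0 lock
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held"
scripts/rpc.py -s "$rpc" framework_enable_cpumask_locks    # take it back
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "lock re-acquired"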
00:08:29.441   23:41:59	-- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:08:29.441   23:41:59	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:29.441   23:41:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:29.441   23:41:59	-- common/autotest_common.sh@10 -- # set +x
00:08:29.441  ************************************
00:08:29.441  START TEST non_locking_app_on_locked_coremask
00:08:29.441  ************************************
00:08:29.441   23:41:59	-- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask
00:08:29.441   23:41:59	-- event/cpu_locks.sh@80 -- # spdk_tgt_pid=105113
00:08:29.441   23:41:59	-- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:08:29.441   23:41:59	-- event/cpu_locks.sh@81 -- # waitforlisten 105113 /var/tmp/spdk.sock
00:08:29.441   23:41:59	-- common/autotest_common.sh@829 -- # '[' -z 105113 ']'
00:08:29.441   23:41:59	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:29.441   23:41:59	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:29.441  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:29.441   23:41:59	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:29.441   23:41:59	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:29.441   23:41:59	-- common/autotest_common.sh@10 -- # set +x
00:08:29.441  [2024-12-13 23:41:59.727388] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:29.441  [2024-12-13 23:41:59.727611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105113 ]
00:08:29.441  [2024-12-13 23:41:59.897305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:29.441  [2024-12-13 23:42:00.094764] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:29.441  [2024-12-13 23:42:00.095009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:30.816   23:42:01	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:30.816   23:42:01	-- common/autotest_common.sh@862 -- # return 0
00:08:30.816   23:42:01	-- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=105141
00:08:30.816   23:42:01	-- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:08:30.816   23:42:01	-- event/cpu_locks.sh@85 -- # waitforlisten 105141 /var/tmp/spdk2.sock
00:08:30.816   23:42:01	-- common/autotest_common.sh@829 -- # '[' -z 105141 ']'
00:08:30.816   23:42:01	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:30.816   23:42:01	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:30.816  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:30.816   23:42:01	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:30.816   23:42:01	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:30.816   23:42:01	-- common/autotest_common.sh@10 -- # set +x
00:08:30.816  [2024-12-13 23:42:01.358373] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:30.816  [2024-12-13 23:42:01.358554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105141 ]
00:08:30.816  [2024-12-13 23:42:01.509218] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:30.816  [2024-12-13 23:42:01.509295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:31.382  [2024-12-13 23:42:01.906833] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:31.382  [2024-12-13 23:42:01.907086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:33.285   23:42:03	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:33.285   23:42:03	-- common/autotest_common.sh@862 -- # return 0
00:08:33.285   23:42:03	-- event/cpu_locks.sh@87 -- # locks_exist 105113
00:08:33.285   23:42:03	-- event/cpu_locks.sh@22 -- # lslocks -p 105113
00:08:33.285   23:42:03	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:33.543   23:42:04	-- event/cpu_locks.sh@89 -- # killprocess 105113
00:08:33.543   23:42:04	-- common/autotest_common.sh@936 -- # '[' -z 105113 ']'
00:08:33.543   23:42:04	-- common/autotest_common.sh@940 -- # kill -0 105113
00:08:33.543    23:42:04	-- common/autotest_common.sh@941 -- # uname
00:08:33.543   23:42:04	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:33.543    23:42:04	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105113
00:08:33.543   23:42:04	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:33.543   23:42:04	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:33.543  killing process with pid 105113
00:08:33.543   23:42:04	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 105113'
00:08:33.543   23:42:04	-- common/autotest_common.sh@955 -- # kill 105113
00:08:33.543   23:42:04	-- common/autotest_common.sh@960 -- # wait 105113
00:08:37.730   23:42:07	-- event/cpu_locks.sh@90 -- # killprocess 105141
00:08:37.730   23:42:07	-- common/autotest_common.sh@936 -- # '[' -z 105141 ']'
00:08:37.730   23:42:07	-- common/autotest_common.sh@940 -- # kill -0 105141
00:08:37.730    23:42:07	-- common/autotest_common.sh@941 -- # uname
00:08:37.730   23:42:07	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:37.730    23:42:07	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105141
00:08:37.730   23:42:08	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:37.730   23:42:08	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:37.730  killing process with pid 105141
00:08:37.730   23:42:08	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 105141'
00:08:37.730   23:42:08	-- common/autotest_common.sh@955 -- # kill 105141
00:08:37.730   23:42:08	-- common/autotest_common.sh@960 -- # wait 105141
00:08:39.663  
00:08:39.663  real	0m10.302s
00:08:39.663  user	0m10.851s
00:08:39.663  sys	0m1.404s
00:08:39.663  ************************************
00:08:39.663  END TEST non_locking_app_on_locked_coremask
00:08:39.663  ************************************
00:08:39.663   23:42:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:39.663   23:42:09	-- common/autotest_common.sh@10 -- # set +x
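Stripped of the harness, non_locking_app_on_locked_coremask is a coexistence check: the first target claims core 0 and its lock, and a second target on the same mask still starts because it is told not to take locks — the "CPU core locks deactivated." notice in the trace is that flag taking effect. Sketch of the traced pairing, binary path shortened to repo-relative:

build/bin/spdk_tgt -m 0x1 &                                                  # takes the core-0 lock
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # starts anyway
# each instance then answers RPCs on its own socket; without the flag the
# second one would abort on the already-claimed core (see the locked test below)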
00:08:39.663   23:42:09	-- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:08:39.663   23:42:09	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:39.663   23:42:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:39.663   23:42:09	-- common/autotest_common.sh@10 -- # set +x
00:08:39.663  ************************************
00:08:39.663  START TEST locking_app_on_unlocked_coremask
00:08:39.663  ************************************
00:08:39.663   23:42:10	-- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask
00:08:39.663   23:42:10	-- event/cpu_locks.sh@98 -- # spdk_tgt_pid=105288
00:08:39.663   23:42:10	-- event/cpu_locks.sh@99 -- # waitforlisten 105288 /var/tmp/spdk.sock
00:08:39.663   23:42:10	-- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:08:39.663   23:42:10	-- common/autotest_common.sh@829 -- # '[' -z 105288 ']'
00:08:39.663   23:42:10	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:39.663   23:42:10	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:39.663  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:39.663   23:42:10	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:39.663   23:42:10	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:39.663   23:42:10	-- common/autotest_common.sh@10 -- # set +x
00:08:39.663  [2024-12-13 23:42:10.082340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:39.663  [2024-12-13 23:42:10.082570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105288 ]
00:08:39.663  [2024-12-13 23:42:10.248883] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:39.663  [2024-12-13 23:42:10.248958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:39.921  [2024-12-13 23:42:10.434091] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:39.921  [2024-12-13 23:42:10.434360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:41.298   23:42:11	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:41.298   23:42:11	-- common/autotest_common.sh@862 -- # return 0
00:08:41.298   23:42:11	-- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=105311
00:08:41.298   23:42:11	-- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:08:41.298   23:42:11	-- event/cpu_locks.sh@103 -- # waitforlisten 105311 /var/tmp/spdk2.sock
00:08:41.298   23:42:11	-- common/autotest_common.sh@829 -- # '[' -z 105311 ']'
00:08:41.298   23:42:11	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:41.298  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:41.298   23:42:11	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:41.298   23:42:11	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:41.298   23:42:11	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:41.298   23:42:11	-- common/autotest_common.sh@10 -- # set +x
00:08:41.298  [2024-12-13 23:42:11.700623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:41.298  [2024-12-13 23:42:11.700794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105311 ]
00:08:41.298  [2024-12-13 23:42:11.849863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:41.557  [2024-12-13 23:42:12.240819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:41.557  [2024-12-13 23:42:12.241048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:43.460   23:42:13	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:43.460   23:42:13	-- common/autotest_common.sh@862 -- # return 0
00:08:43.460   23:42:13	-- event/cpu_locks.sh@105 -- # locks_exist 105311
00:08:43.460   23:42:13	-- event/cpu_locks.sh@22 -- # lslocks -p 105311
00:08:43.460   23:42:13	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:43.719   23:42:14	-- event/cpu_locks.sh@107 -- # killprocess 105288
00:08:43.719   23:42:14	-- common/autotest_common.sh@936 -- # '[' -z 105288 ']'
00:08:43.719   23:42:14	-- common/autotest_common.sh@940 -- # kill -0 105288
00:08:43.719    23:42:14	-- common/autotest_common.sh@941 -- # uname
00:08:43.719   23:42:14	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:43.719    23:42:14	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105288
00:08:43.719   23:42:14	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:43.719   23:42:14	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:43.719  killing process with pid 105288
00:08:43.719   23:42:14	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 105288'
00:08:43.719   23:42:14	-- common/autotest_common.sh@955 -- # kill 105288
00:08:43.719   23:42:14	-- common/autotest_common.sh@960 -- # wait 105288
00:08:47.908   23:42:18	-- event/cpu_locks.sh@108 -- # killprocess 105311
00:08:47.908   23:42:18	-- common/autotest_common.sh@936 -- # '[' -z 105311 ']'
00:08:47.908   23:42:18	-- common/autotest_common.sh@940 -- # kill -0 105311
00:08:47.908    23:42:18	-- common/autotest_common.sh@941 -- # uname
00:08:47.908   23:42:18	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:47.908    23:42:18	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105311
00:08:47.908   23:42:18	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:47.908  killing process with pid 105311
00:08:47.908   23:42:18	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:47.909   23:42:18	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 105311'
00:08:47.909   23:42:18	-- common/autotest_common.sh@955 -- # kill 105311
00:08:47.909   23:42:18	-- common/autotest_common.sh@960 -- # wait 105311
00:08:49.812  
00:08:49.812  real	0m10.214s
00:08:49.812  user	0m10.693s
00:08:49.812  sys	0m1.400s
00:08:49.812  ************************************
00:08:49.812  END TEST locking_app_on_unlocked_coremask
00:08:49.812  ************************************
00:08:49.812   23:42:20	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:49.812   23:42:20	-- common/autotest_common.sh@10 -- # set +x
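locking_app_on_unlocked_coremask inverts the roles: the first target starts with --disable-cpumask-locks, so the lock on core 0 is free and the second, lock-taking target acquires it unopposed — hence both come up cleanly in the trace above. The traced pairing, condensed:

build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &          # leaves core 0 unlocked
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &           # takes the core-0 lock itself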
00:08:49.812   23:42:20	-- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:08:49.812   23:42:20	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:49.812   23:42:20	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:49.812   23:42:20	-- common/autotest_common.sh@10 -- # set +x
00:08:49.812  ************************************
00:08:49.812  START TEST locking_app_on_locked_coremask
00:08:49.812  ************************************
00:08:49.812   23:42:20	-- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask
00:08:49.812   23:42:20	-- event/cpu_locks.sh@115 -- # spdk_tgt_pid=105458
00:08:49.812   23:42:20	-- event/cpu_locks.sh@116 -- # waitforlisten 105458 /var/tmp/spdk.sock
00:08:49.812   23:42:20	-- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:08:49.812   23:42:20	-- common/autotest_common.sh@829 -- # '[' -z 105458 ']'
00:08:49.812   23:42:20	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:49.812   23:42:20	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:49.812   23:42:20	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:49.812  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:49.812   23:42:20	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:49.812   23:42:20	-- common/autotest_common.sh@10 -- # set +x
00:08:49.812  [2024-12-13 23:42:20.356844] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:49.812  [2024-12-13 23:42:20.357334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105458 ]
00:08:49.812  [2024-12-13 23:42:20.534896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:50.071  [2024-12-13 23:42:20.718909] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:50.071  [2024-12-13 23:42:20.719149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:51.447   23:42:22	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:51.447   23:42:22	-- common/autotest_common.sh@862 -- # return 0
00:08:51.447   23:42:22	-- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=105493
00:08:51.447   23:42:22	-- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:08:51.447   23:42:22	-- event/cpu_locks.sh@120 -- # NOT waitforlisten 105493 /var/tmp/spdk2.sock
00:08:51.447   23:42:22	-- common/autotest_common.sh@650 -- # local es=0
00:08:51.447   23:42:22	-- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 105493 /var/tmp/spdk2.sock
00:08:51.447   23:42:22	-- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:08:51.447   23:42:22	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:51.447    23:42:22	-- common/autotest_common.sh@642 -- # type -t waitforlisten
00:08:51.447   23:42:22	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:51.447   23:42:22	-- common/autotest_common.sh@653 -- # waitforlisten 105493 /var/tmp/spdk2.sock
00:08:51.447   23:42:22	-- common/autotest_common.sh@829 -- # '[' -z 105493 ']'
00:08:51.447   23:42:22	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:51.447   23:42:22	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:51.447   23:42:22	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:51.447  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:51.447   23:42:22	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:51.447   23:42:22	-- common/autotest_common.sh@10 -- # set +x
00:08:51.447  [2024-12-13 23:42:22.074513] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:51.447  [2024-12-13 23:42:22.074708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105493 ]
00:08:51.704  [2024-12-13 23:42:22.225311] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 105458 has claimed it.
00:08:51.704  [2024-12-13 23:42:22.225403] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:08:52.271  ERROR: process (pid: 105493) is no longer running
00:08:52.271  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (105493) - No such process
00:08:52.271   23:42:22	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:52.271   23:42:22	-- common/autotest_common.sh@862 -- # return 1
00:08:52.271   23:42:22	-- common/autotest_common.sh@653 -- # es=1
00:08:52.271   23:42:22	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:52.271   23:42:22	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:52.271   23:42:22	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
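The es bookkeeping above is the harness's NOT wrapper turning an expected failure into a pass: waitforlisten exits non-zero because the second target never came up, statuses above 128 (signal deaths) are folded down first, and the final check asserts the status was non-zero. A rough bash sketch of the pattern, assuming the helper behaves as traced (the real code lives in test/common/autotest_common.sh):

    NOT() {                                 # sketch only, reconstructed from the xtrace
        local es=0
        "$@" || es=$?                       # run the wrapped command, capture its status
        (( es > 128 )) && es=$((es - 128))  # fold signal exits (e.g. 234 -> 106, seen later)
        (( es != 0 ))                       # pass only if the command actually failed
    }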
00:08:52.271   23:42:22	-- event/cpu_locks.sh@122 -- # locks_exist 105458
00:08:52.271   23:42:22	-- event/cpu_locks.sh@22 -- # lslocks -p 105458
00:08:52.271   23:42:22	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
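locks_exist just asks the kernel whether the surviving target still holds its per-core file lock; a minimal sketch matching the two traced commands (lock files sit at /var/tmp/spdk_cpu_lock_NNN):

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # any flock entry on a spdk_cpu_lock file
    }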
00:08:52.529   23:42:23	-- event/cpu_locks.sh@124 -- # killprocess 105458
00:08:52.529   23:42:23	-- common/autotest_common.sh@936 -- # '[' -z 105458 ']'
00:08:52.529   23:42:23	-- common/autotest_common.sh@940 -- # kill -0 105458
00:08:52.529    23:42:23	-- common/autotest_common.sh@941 -- # uname
00:08:52.529   23:42:23	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:52.529    23:42:23	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105458
00:08:52.529   23:42:23	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:52.529   23:42:23	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:52.529  killing process with pid 105458
00:08:52.529   23:42:23	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 105458'
00:08:52.529   23:42:23	-- common/autotest_common.sh@955 -- # kill 105458
00:08:52.529   23:42:23	-- common/autotest_common.sh@960 -- # wait 105458
00:08:54.460  
00:08:54.460  real	0m4.698s
00:08:54.460  user	0m5.019s
00:08:54.460  sys	0m0.884s
00:08:54.460  ************************************
00:08:54.460  END TEST locking_app_on_locked_coremask
00:08:54.460  ************************************
00:08:54.460   23:42:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:54.460   23:42:24	-- common/autotest_common.sh@10 -- # set +x
00:08:54.460   23:42:25	-- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:08:54.460   23:42:25	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:54.460   23:42:25	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:54.460   23:42:25	-- common/autotest_common.sh@10 -- # set +x
00:08:54.460  ************************************
00:08:54.460  START TEST locking_overlapped_coremask
00:08:54.460  ************************************
00:08:54.460   23:42:25	-- common/autotest_common.sh@1114 -- # locking_overlapped_coremask
00:08:54.460   23:42:25	-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=105562
00:08:54.460   23:42:25	-- event/cpu_locks.sh@133 -- # waitforlisten 105562 /var/tmp/spdk.sock
00:08:54.460   23:42:25	-- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:08:54.460   23:42:25	-- common/autotest_common.sh@829 -- # '[' -z 105562 ']'
00:08:54.460   23:42:25	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:54.460   23:42:25	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:54.460   23:42:25	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:54.460  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:54.460   23:42:25	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:54.460   23:42:25	-- common/autotest_common.sh@10 -- # set +x
00:08:54.460  [2024-12-13 23:42:25.107532] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:54.461  [2024-12-13 23:42:25.107755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105562 ]
00:08:54.724  [2024-12-13 23:42:25.287553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:54.984  [2024-12-13 23:42:25.471442] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:54.984  [2024-12-13 23:42:25.471808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:54.984  [2024-12-13 23:42:25.471989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:54.984  [2024-12-13 23:42:25.471975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:56.358   23:42:26	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:56.358   23:42:26	-- common/autotest_common.sh@862 -- # return 0
00:08:56.358   23:42:26	-- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=105587
00:08:56.358   23:42:26	-- event/cpu_locks.sh@137 -- # NOT waitforlisten 105587 /var/tmp/spdk2.sock
00:08:56.358   23:42:26	-- common/autotest_common.sh@650 -- # local es=0
00:08:56.358   23:42:26	-- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 105587 /var/tmp/spdk2.sock
00:08:56.358   23:42:26	-- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:08:56.358   23:42:26	-- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:08:56.358   23:42:26	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:56.358    23:42:26	-- common/autotest_common.sh@642 -- # type -t waitforlisten
00:08:56.358   23:42:26	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:56.358   23:42:26	-- common/autotest_common.sh@653 -- # waitforlisten 105587 /var/tmp/spdk2.sock
00:08:56.358   23:42:26	-- common/autotest_common.sh@829 -- # '[' -z 105587 ']'
00:08:56.358   23:42:26	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:56.358   23:42:26	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:56.358   23:42:26	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:56.358  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:56.358   23:42:26	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:56.358   23:42:26	-- common/autotest_common.sh@10 -- # set +x
00:08:56.358  [2024-12-13 23:42:26.810412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:56.358  [2024-12-13 23:42:26.810600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105587 ]
00:08:56.358  [2024-12-13 23:42:26.988144] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 105562 has claimed it.
00:08:56.358  [2024-12-13 23:42:26.988242] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
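The claim failure is by construction: -m 0x7 pins cores 0-2 while -m 0x1c pins cores 2-4, so the two targets collide exactly on core 2. A quick way to expand the masks and see the overlap:

    for mask in 0x7 0x1c; do
        printf '%s -> cores:' "$mask"
        for i in {0..7}; do (( mask >> i & 1 )) && printf ' %d' "$i"; done
        printf '\n'
    done
    # 0x7  -> cores: 0 1 2
    # 0x1c -> cores: 2 3 4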
00:08:56.925  ERROR: process (pid: 105587) is no longer running
00:08:56.925  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (105587) - No such process
00:08:56.925   23:42:27	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:56.925   23:42:27	-- common/autotest_common.sh@862 -- # return 1
00:08:56.925   23:42:27	-- common/autotest_common.sh@653 -- # es=1
00:08:56.925   23:42:27	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:56.925   23:42:27	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:56.925   23:42:27	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:56.925   23:42:27	-- event/cpu_locks.sh@139 -- # check_remaining_locks
00:08:56.925   23:42:27	-- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:08:56.925   23:42:27	-- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:08:56.925   23:42:27	-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
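check_remaining_locks, reconstructed from the three traced lines: glob whatever lock files exist and compare them against the exact set expected for a three-core mask. The xtrace prints the [[ ]] pattern with every character backslash-escaped, which is why the comparison line above looks mangled:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                    # what is actually on disk
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 for -m 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]         # literal, exact-set match
    }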
00:08:56.925   23:42:27	-- event/cpu_locks.sh@141 -- # killprocess 105562
00:08:56.925   23:42:27	-- common/autotest_common.sh@936 -- # '[' -z 105562 ']'
00:08:56.925   23:42:27	-- common/autotest_common.sh@940 -- # kill -0 105562
00:08:56.925    23:42:27	-- common/autotest_common.sh@941 -- # uname
00:08:56.925   23:42:27	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:56.925    23:42:27	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105562
00:08:56.925   23:42:27	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:56.925  killing process with pid 105562
00:08:56.925   23:42:27	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:56.925   23:42:27	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 105562'
00:08:56.925   23:42:27	-- common/autotest_common.sh@955 -- # kill 105562
00:08:56.925   23:42:27	-- common/autotest_common.sh@960 -- # wait 105562
00:08:58.827  
00:08:58.827  real	0m4.522s
00:08:58.827  user	0m12.183s
00:08:58.827  sys	0m0.711s
00:08:58.827  ************************************
00:08:58.827  END TEST locking_overlapped_coremask
00:08:58.827  ************************************
00:08:58.827   23:42:29	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:58.827   23:42:29	-- common/autotest_common.sh@10 -- # set +x
00:08:59.085   23:42:29	-- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:08:59.085   23:42:29	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:59.085   23:42:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:59.085   23:42:29	-- common/autotest_common.sh@10 -- # set +x
00:08:59.085  ************************************
00:08:59.085  START TEST locking_overlapped_coremask_via_rpc
00:08:59.085  ************************************
00:08:59.085   23:42:29	-- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc
00:08:59.085   23:42:29	-- event/cpu_locks.sh@148 -- # spdk_tgt_pid=105658
00:08:59.085   23:42:29	-- event/cpu_locks.sh@149 -- # waitforlisten 105658 /var/tmp/spdk.sock
00:08:59.085   23:42:29	-- common/autotest_common.sh@829 -- # '[' -z 105658 ']'
00:08:59.085   23:42:29	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:59.085   23:42:29	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:59.085   23:42:29	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:59.085  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:59.085   23:42:29	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:59.085   23:42:29	-- common/autotest_common.sh@10 -- # set +x
00:08:59.085   23:42:29	-- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:08:59.085  [2024-12-13 23:42:29.682906] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:59.085  [2024-12-13 23:42:29.683326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105658 ]
00:08:59.343  [2024-12-13 23:42:29.861535] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:59.343  [2024-12-13 23:42:29.861617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:59.343  [2024-12-13 23:42:30.042579] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:59.343  [2024-12-13 23:42:30.042980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:59.343  [2024-12-13 23:42:30.043136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:59.343  [2024-12-13 23:42:30.043133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:00.718   23:42:31	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:00.718   23:42:31	-- common/autotest_common.sh@862 -- # return 0
00:09:00.718   23:42:31	-- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=105695
00:09:00.718   23:42:31	-- event/cpu_locks.sh@153 -- # waitforlisten 105695 /var/tmp/spdk2.sock
00:09:00.718   23:42:31	-- common/autotest_common.sh@829 -- # '[' -z 105695 ']'
00:09:00.719   23:42:31	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:00.719   23:42:31	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:00.719   23:42:31	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:00.719  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:00.719   23:42:31	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:00.719   23:42:31	-- common/autotest_common.sh@10 -- # set +x
00:09:00.719   23:42:31	-- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:09:00.719  [2024-12-13 23:42:31.428463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:00.719  [2024-12-13 23:42:31.428662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105695 ]
00:09:00.977  [2024-12-13 23:42:31.615650] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:00.977  [2024-12-13 23:42:31.615741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:01.545  [2024-12-13 23:42:32.042559] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:01.545  [2024-12-13 23:42:32.043745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:09:01.545  [2024-12-13 23:42:32.043885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:01.545  [2024-12-13 23:42:32.043888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:09:03.451   23:42:33	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:03.451   23:42:33	-- common/autotest_common.sh@862 -- # return 0
00:09:03.451   23:42:33	-- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:09:03.451   23:42:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.451   23:42:33	-- common/autotest_common.sh@10 -- # set +x
00:09:03.451   23:42:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.451   23:42:33	-- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:03.451   23:42:33	-- common/autotest_common.sh@650 -- # local es=0
00:09:03.451   23:42:33	-- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:03.451   23:42:33	-- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:09:03.451   23:42:33	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:03.451    23:42:33	-- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:09:03.451   23:42:33	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:03.451   23:42:33	-- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:03.451   23:42:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.451   23:42:33	-- common/autotest_common.sh@10 -- # set +x
00:09:03.451  [2024-12-13 23:42:33.845832] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 105658 has claimed it.
00:09:03.451  request:
00:09:03.451  {
00:09:03.451  "method": "framework_enable_cpumask_locks",
00:09:03.451  "req_id": 1
00:09:03.451  }
00:09:03.451  Got JSON-RPC error response
00:09:03.451  response:
00:09:03.451  {
00:09:03.451  "code": -32603,
00:09:03.451  "message": "Failed to claim CPU core: 2"
00:09:03.451  }
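The same error is reproducible by hand with SPDK's rpc.py against the second target's socket; method name and socket path exactly as in the trace (an illustrative invocation, not part of this run):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> JSON-RPC error -32603, 'Failed to claim CPU core: 2', while pid 105658 holds the lock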
00:09:03.451   23:42:33	-- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:09:03.451   23:42:33	-- common/autotest_common.sh@653 -- # es=1
00:09:03.451   23:42:33	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:03.451   23:42:33	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:03.451   23:42:33	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:09:03.451   23:42:33	-- event/cpu_locks.sh@158 -- # waitforlisten 105658 /var/tmp/spdk.sock
00:09:03.451   23:42:33	-- common/autotest_common.sh@829 -- # '[' -z 105658 ']'
00:09:03.451   23:42:33	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:03.451   23:42:33	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:03.451  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:03.451   23:42:33	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:03.451   23:42:33	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:03.451   23:42:33	-- common/autotest_common.sh@10 -- # set +x
00:09:03.451   23:42:34	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:03.451   23:42:34	-- common/autotest_common.sh@862 -- # return 0
00:09:03.451   23:42:34	-- event/cpu_locks.sh@159 -- # waitforlisten 105695 /var/tmp/spdk2.sock
00:09:03.451   23:42:34	-- common/autotest_common.sh@829 -- # '[' -z 105695 ']'
00:09:03.451   23:42:34	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:03.451   23:42:34	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:03.451  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:03.451   23:42:34	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:03.451   23:42:34	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:03.451   23:42:34	-- common/autotest_common.sh@10 -- # set +x
00:09:03.710   23:42:34	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:03.710   23:42:34	-- common/autotest_common.sh@862 -- # return 0
00:09:03.710   23:42:34	-- event/cpu_locks.sh@161 -- # check_remaining_locks
00:09:03.710   23:42:34	-- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:09:03.710   23:42:34	-- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:09:03.710   23:42:34	-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:09:03.710  
00:09:03.710  real	0m4.659s
00:09:03.710  user	0m1.803s
00:09:03.710  sys	0m0.295s
00:09:03.710   23:42:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:03.710   23:42:34	-- common/autotest_common.sh@10 -- # set +x
00:09:03.710  ************************************
00:09:03.710  END TEST locking_overlapped_coremask_via_rpc
00:09:03.710  ************************************
00:09:03.710   23:42:34	-- event/cpu_locks.sh@174 -- # cleanup
00:09:03.710   23:42:34	-- event/cpu_locks.sh@15 -- # [[ -z 105658 ]]
00:09:03.710   23:42:34	-- event/cpu_locks.sh@15 -- # killprocess 105658
00:09:03.710   23:42:34	-- common/autotest_common.sh@936 -- # '[' -z 105658 ']'
00:09:03.710   23:42:34	-- common/autotest_common.sh@940 -- # kill -0 105658
00:09:03.710    23:42:34	-- common/autotest_common.sh@941 -- # uname
00:09:03.710   23:42:34	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:03.710    23:42:34	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105658
00:09:03.710   23:42:34	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:03.710   23:42:34	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:03.710  killing process with pid 105658
00:09:03.710   23:42:34	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 105658'
00:09:03.710   23:42:34	-- common/autotest_common.sh@955 -- # kill 105658
00:09:03.710   23:42:34	-- common/autotest_common.sh@960 -- # wait 105658
00:09:06.276   23:42:36	-- event/cpu_locks.sh@16 -- # [[ -z 105695 ]]
00:09:06.276   23:42:36	-- event/cpu_locks.sh@16 -- # killprocess 105695
00:09:06.276   23:42:36	-- common/autotest_common.sh@936 -- # '[' -z 105695 ']'
00:09:06.276   23:42:36	-- common/autotest_common.sh@940 -- # kill -0 105695
00:09:06.276    23:42:36	-- common/autotest_common.sh@941 -- # uname
00:09:06.276   23:42:36	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:06.276    23:42:36	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105695
00:09:06.276   23:42:36	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:09:06.276   23:42:36	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:09:06.276  killing process with pid 105695
00:09:06.276   23:42:36	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 105695'
00:09:06.276   23:42:36	-- common/autotest_common.sh@955 -- # kill 105695
00:09:06.276   23:42:36	-- common/autotest_common.sh@960 -- # wait 105695
00:09:08.178   23:42:38	-- event/cpu_locks.sh@18 -- # rm -f
00:09:08.178   23:42:38	-- event/cpu_locks.sh@1 -- # cleanup
00:09:08.178   23:42:38	-- event/cpu_locks.sh@15 -- # [[ -z 105658 ]]
00:09:08.178   23:42:38	-- event/cpu_locks.sh@15 -- # killprocess 105658
00:09:08.178   23:42:38	-- common/autotest_common.sh@936 -- # '[' -z 105658 ']'
00:09:08.178   23:42:38	-- common/autotest_common.sh@940 -- # kill -0 105658
00:09:08.178  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (105658) - No such process
00:09:08.178  Process with pid 105658 is not found
00:09:08.178   23:42:38	-- common/autotest_common.sh@963 -- # echo 'Process with pid 105658 is not found'
00:09:08.178   23:42:38	-- event/cpu_locks.sh@16 -- # [[ -z 105695 ]]
00:09:08.178   23:42:38	-- event/cpu_locks.sh@16 -- # killprocess 105695
00:09:08.178   23:42:38	-- common/autotest_common.sh@936 -- # '[' -z 105695 ']'
00:09:08.178   23:42:38	-- common/autotest_common.sh@940 -- # kill -0 105695
00:09:08.178  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (105695) - No such process
00:09:08.178  Process with pid 105695 is not found
00:09:08.178   23:42:38	-- common/autotest_common.sh@963 -- # echo 'Process with pid 105695 is not found'
00:09:08.178   23:42:38	-- event/cpu_locks.sh@18 -- # rm -f
00:09:08.178  
00:09:08.178  real	0m47.373s
00:09:08.178  user	1m21.936s
00:09:08.178  sys	0m7.358s
00:09:08.178   23:42:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:08.178   23:42:38	-- common/autotest_common.sh@10 -- # set +x
00:09:08.178  ************************************
00:09:08.178  END TEST cpu_locks
00:09:08.178  ************************************
00:09:08.178  
00:09:08.178  real	1m19.478s
00:09:08.178  user	2m22.958s
00:09:08.178  sys	0m11.422s
00:09:08.178   23:42:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:08.179   23:42:38	-- common/autotest_common.sh@10 -- # set +x
00:09:08.179  ************************************
00:09:08.179  END TEST event
00:09:08.179  ************************************
00:09:08.179   23:42:38	-- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:09:08.179   23:42:38	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:08.179   23:42:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:08.179   23:42:38	-- common/autotest_common.sh@10 -- # set +x
00:09:08.179  ************************************
00:09:08.179  START TEST thread
00:09:08.179  ************************************
00:09:08.179   23:42:38	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:09:08.179  * Looking for test storage...
00:09:08.179  * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:09:08.179    23:42:38	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:09:08.179     23:42:38	-- common/autotest_common.sh@1690 -- # lcov --version
00:09:08.179     23:42:38	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:09:08.179    23:42:38	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:09:08.179    23:42:38	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:09:08.179    23:42:38	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:09:08.179    23:42:38	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:09:08.179    23:42:38	-- scripts/common.sh@335 -- # IFS=.-:
00:09:08.179    23:42:38	-- scripts/common.sh@335 -- # read -ra ver1
00:09:08.179    23:42:38	-- scripts/common.sh@336 -- # IFS=.-:
00:09:08.179    23:42:38	-- scripts/common.sh@336 -- # read -ra ver2
00:09:08.179    23:42:38	-- scripts/common.sh@337 -- # local 'op=<'
00:09:08.179    23:42:38	-- scripts/common.sh@339 -- # ver1_l=2
00:09:08.179    23:42:38	-- scripts/common.sh@340 -- # ver2_l=1
00:09:08.179    23:42:38	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:09:08.179    23:42:38	-- scripts/common.sh@343 -- # case "$op" in
00:09:08.179    23:42:38	-- scripts/common.sh@344 -- # : 1
00:09:08.179    23:42:38	-- scripts/common.sh@363 -- # (( v = 0 ))
00:09:08.179    23:42:38	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:08.179     23:42:38	-- scripts/common.sh@364 -- # decimal 1
00:09:08.179     23:42:38	-- scripts/common.sh@352 -- # local d=1
00:09:08.179     23:42:38	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:08.179     23:42:38	-- scripts/common.sh@354 -- # echo 1
00:09:08.179    23:42:38	-- scripts/common.sh@364 -- # ver1[v]=1
00:09:08.179     23:42:38	-- scripts/common.sh@365 -- # decimal 2
00:09:08.179     23:42:38	-- scripts/common.sh@352 -- # local d=2
00:09:08.179     23:42:38	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:08.179     23:42:38	-- scripts/common.sh@354 -- # echo 2
00:09:08.179    23:42:38	-- scripts/common.sh@365 -- # ver2[v]=2
00:09:08.179    23:42:38	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:09:08.179    23:42:38	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:09:08.179    23:42:38	-- scripts/common.sh@367 -- # return 0
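The xtrace above is scripts/common.sh comparing the installed lcov version against 2 before choosing coverage flags. Condensed into a sketch (simplified: the real code routes each component through its decimal helper to strip non-numeric suffixes):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: op=$2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]   # components equal: only ==, <=, >= succeed
    }
    lt 1.15 2 && echo 'lcov predates 2: use the --rc lcov_*_coverage options'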
00:09:08.179    23:42:38	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:08.179    23:42:38	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:09:08.179  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:08.179  		--rc genhtml_branch_coverage=1
00:09:08.179  		--rc genhtml_function_coverage=1
00:09:08.179  		--rc genhtml_legend=1
00:09:08.179  		--rc geninfo_all_blocks=1
00:09:08.179  		--rc geninfo_unexecuted_blocks=1
00:09:08.179  		
00:09:08.179  		'
00:09:08.179    23:42:38	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:09:08.179  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:08.179  		--rc genhtml_branch_coverage=1
00:09:08.179  		--rc genhtml_function_coverage=1
00:09:08.179  		--rc genhtml_legend=1
00:09:08.179  		--rc geninfo_all_blocks=1
00:09:08.179  		--rc geninfo_unexecuted_blocks=1
00:09:08.179  		
00:09:08.179  		'
00:09:08.179    23:42:38	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:09:08.179  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:08.179  		--rc genhtml_branch_coverage=1
00:09:08.179  		--rc genhtml_function_coverage=1
00:09:08.179  		--rc genhtml_legend=1
00:09:08.179  		--rc geninfo_all_blocks=1
00:09:08.179  		--rc geninfo_unexecuted_blocks=1
00:09:08.179  		
00:09:08.179  		'
00:09:08.179    23:42:38	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:09:08.179  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:08.179  		--rc genhtml_branch_coverage=1
00:09:08.179  		--rc genhtml_function_coverage=1
00:09:08.179  		--rc genhtml_legend=1
00:09:08.179  		--rc geninfo_all_blocks=1
00:09:08.179  		--rc geninfo_unexecuted_blocks=1
00:09:08.179  		
00:09:08.179  		'
00:09:08.179   23:42:38	-- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:09:08.179   23:42:38	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:09:08.179   23:42:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:08.179   23:42:38	-- common/autotest_common.sh@10 -- # set +x
00:09:08.179  ************************************
00:09:08.179  START TEST thread_poller_perf
00:09:08.179  ************************************
00:09:08.179   23:42:38	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:09:08.179  [2024-12-13 23:42:38.871628] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:08.179  [2024-12-13 23:42:38.871972] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105895 ]
00:09:08.437  [2024-12-13 23:42:39.047840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:08.696  [2024-12-13 23:42:39.302603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:08.696  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:09:10.073  
00:09:10.073  ======================================
00:09:10.073  busy:2209646108 (cyc)
00:09:10.073  total_run_count: 376000
00:09:10.073  tsc_hz: 2200000000 (cyc)
00:09:10.073  ======================================
00:09:10.073  poller_cost: 5876 (cyc), 2670 (nsec)
00:09:10.073  
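poller_cost is just busy cycles divided by total_run_count, converted to nanoseconds via the TSC rate. A one-liner to check (the tool's own integer rounding can land a nanosecond or two lower):

    awk -v busy=2209646108 -v runs=376000 -v hz=2200000000 \
        'BEGIN { c = busy / runs; printf "%d cyc, %d nsec\n", c, c * 1e9 / hz }'
    # -> 5876 cyc, 2671 nsec here (report says 2670); the 0-period run below
    #    works out the same way: 2207700702 / 4909000 ~= 449 cyc ~= 204 nsec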
00:09:10.073  real	0m1.839s
00:09:10.073  user	0m1.577s
00:09:10.073  sys	0m0.162s
00:09:10.073   23:42:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:10.073   23:42:40	-- common/autotest_common.sh@10 -- # set +x
00:09:10.073  ************************************
00:09:10.073  END TEST thread_poller_perf
00:09:10.073  ************************************
00:09:10.073   23:42:40	-- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:09:10.073   23:42:40	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:09:10.073   23:42:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:10.073   23:42:40	-- common/autotest_common.sh@10 -- # set +x
00:09:10.073  ************************************
00:09:10.073  START TEST thread_poller_perf
00:09:10.073  ************************************
00:09:10.073   23:42:40	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:09:10.073  [2024-12-13 23:42:40.757790] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:10.073  [2024-12-13 23:42:40.757990] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105942 ]
00:09:10.331  [2024-12-13 23:42:40.925710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:10.591  [2024-12-13 23:42:41.131882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:10.591  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:09:11.968  
00:09:11.968  ======================================
00:09:11.968  busy:2207700702 (cyc)
00:09:11.968  total_run_count: 4909000
00:09:11.968  tsc_hz: 2200000000 (cyc)
00:09:11.968  ======================================
00:09:11.968  poller_cost: 449 (cyc), 204 (nsec)
00:09:11.968  
00:09:11.968  real	0m1.775s
00:09:11.968  user	0m1.526s
00:09:11.968  sys	0m0.148s
00:09:11.968   23:42:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:11.968   23:42:42	-- common/autotest_common.sh@10 -- # set +x
00:09:11.968  ************************************
00:09:11.968  END TEST thread_poller_perf
00:09:11.968  ************************************
00:09:11.968   23:42:42	-- thread/thread.sh@17 -- # [[ n != \y ]]
00:09:11.968   23:42:42	-- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock
00:09:11.968   23:42:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:11.968   23:42:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:11.968   23:42:42	-- common/autotest_common.sh@10 -- # set +x
00:09:11.968  ************************************
00:09:11.968  START TEST thread_spdk_lock
00:09:11.968  ************************************
00:09:11.968   23:42:42	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock
00:09:11.968  [2024-12-13 23:42:42.590059] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:11.968  [2024-12-13 23:42:42.590398] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105990 ]
00:09:12.226  [2024-12-13 23:42:42.764263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:12.226  [2024-12-13 23:42:42.954636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:12.227  [2024-12-13 23:42:42.954640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:12.794  [2024-12-13 23:42:43.471499] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 957:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:09:12.794  [2024-12-13 23:42:43.471635] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:09:12.794  [2024-12-13 23:42:43.471685] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x55fd0d2b1ac0
00:09:12.794  [2024-12-13 23:42:43.478734] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:09:12.794  [2024-12-13 23:42:43.478838] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1018:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:09:12.794  [2024-12-13 23:42:43.478884] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:09:13.362  Starting test contend
00:09:13.362    Worker    Delay  Wait us  Hold us Total us
00:09:13.362         0        3   125566   192590   318156
00:09:13.362         1        5    48497   295915   344413
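Per worker, Total us is Wait us plus Hold us; worker 1 lands one off, presumably because the columns are rounded independently:

    awk 'BEGIN { print 125566 + 192590, 48497 + 295915 }'
    # -> 318156 344412 (the table reports 344413 for worker 1)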
00:09:13.362  PASS test contend
00:09:13.362  Starting test hold_by_poller
00:09:13.362  PASS test hold_by_poller
00:09:13.362  Starting test hold_by_message
00:09:13.362  PASS test hold_by_message
00:09:13.362  /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary:
00:09:13.362     100014 assertions passed
00:09:13.362          0 assertions failed
00:09:13.362  
00:09:13.362  real	0m1.279s
00:09:13.362  user	0m1.565s
00:09:13.362  sys	0m0.140s
00:09:13.362   23:42:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:13.362   23:42:43	-- common/autotest_common.sh@10 -- # set +x
00:09:13.362  ************************************
00:09:13.362  END TEST thread_spdk_lock
00:09:13.362  ************************************
00:09:13.362  
00:09:13.362  real	0m5.218s
00:09:13.362  user	0m4.885s
00:09:13.362  sys	0m0.561s
00:09:13.362   23:42:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:13.362   23:42:43	-- common/autotest_common.sh@10 -- # set +x
00:09:13.362  ************************************
00:09:13.362  END TEST thread
00:09:13.362  ************************************
00:09:13.362   23:42:43	-- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh
00:09:13.362   23:42:43	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:13.362   23:42:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:13.362   23:42:43	-- common/autotest_common.sh@10 -- # set +x
00:09:13.362  ************************************
00:09:13.362  START TEST accel
00:09:13.362  ************************************
00:09:13.362   23:42:43	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh
00:09:13.362  * Looking for test storage...
00:09:13.362  * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel
00:09:13.362    23:42:43	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:09:13.362     23:42:43	-- common/autotest_common.sh@1690 -- # lcov --version
00:09:13.362     23:42:43	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:09:13.362    23:42:44	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:09:13.362    23:42:44	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:09:13.362    23:42:44	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:09:13.362    23:42:44	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:09:13.362    23:42:44	-- scripts/common.sh@335 -- # IFS=.-:
00:09:13.362    23:42:44	-- scripts/common.sh@335 -- # read -ra ver1
00:09:13.362    23:42:44	-- scripts/common.sh@336 -- # IFS=.-:
00:09:13.362    23:42:44	-- scripts/common.sh@336 -- # read -ra ver2
00:09:13.362    23:42:44	-- scripts/common.sh@337 -- # local 'op=<'
00:09:13.362    23:42:44	-- scripts/common.sh@339 -- # ver1_l=2
00:09:13.362    23:42:44	-- scripts/common.sh@340 -- # ver2_l=1
00:09:13.362    23:42:44	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:09:13.362    23:42:44	-- scripts/common.sh@343 -- # case "$op" in
00:09:13.362    23:42:44	-- scripts/common.sh@344 -- # : 1
00:09:13.362    23:42:44	-- scripts/common.sh@363 -- # (( v = 0 ))
00:09:13.362    23:42:44	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:13.362     23:42:44	-- scripts/common.sh@364 -- # decimal 1
00:09:13.362     23:42:44	-- scripts/common.sh@352 -- # local d=1
00:09:13.362     23:42:44	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:13.362     23:42:44	-- scripts/common.sh@354 -- # echo 1
00:09:13.362    23:42:44	-- scripts/common.sh@364 -- # ver1[v]=1
00:09:13.362     23:42:44	-- scripts/common.sh@365 -- # decimal 2
00:09:13.362     23:42:44	-- scripts/common.sh@352 -- # local d=2
00:09:13.362     23:42:44	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:13.362     23:42:44	-- scripts/common.sh@354 -- # echo 2
00:09:13.362    23:42:44	-- scripts/common.sh@365 -- # ver2[v]=2
00:09:13.362    23:42:44	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:09:13.362    23:42:44	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:09:13.362    23:42:44	-- scripts/common.sh@367 -- # return 0
00:09:13.362    23:42:44	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:13.362    23:42:44	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:09:13.362  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:13.362  		--rc genhtml_branch_coverage=1
00:09:13.362  		--rc genhtml_function_coverage=1
00:09:13.362  		--rc genhtml_legend=1
00:09:13.362  		--rc geninfo_all_blocks=1
00:09:13.362  		--rc geninfo_unexecuted_blocks=1
00:09:13.362  		
00:09:13.362  		'
00:09:13.362    23:42:44	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:09:13.362  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:13.362  		--rc genhtml_branch_coverage=1
00:09:13.362  		--rc genhtml_function_coverage=1
00:09:13.362  		--rc genhtml_legend=1
00:09:13.362  		--rc geninfo_all_blocks=1
00:09:13.362  		--rc geninfo_unexecuted_blocks=1
00:09:13.362  		
00:09:13.362  		'
00:09:13.362    23:42:44	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:09:13.362  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:13.362  		--rc genhtml_branch_coverage=1
00:09:13.362  		--rc genhtml_function_coverage=1
00:09:13.362  		--rc genhtml_legend=1
00:09:13.362  		--rc geninfo_all_blocks=1
00:09:13.362  		--rc geninfo_unexecuted_blocks=1
00:09:13.362  		
00:09:13.362  		'
00:09:13.362    23:42:44	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:09:13.362  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:13.362  		--rc genhtml_branch_coverage=1
00:09:13.362  		--rc genhtml_function_coverage=1
00:09:13.362  		--rc genhtml_legend=1
00:09:13.362  		--rc geninfo_all_blocks=1
00:09:13.362  		--rc geninfo_unexecuted_blocks=1
00:09:13.362  		
00:09:13.362  		'
00:09:13.362   23:42:44	-- accel/accel.sh@73 -- # declare -A expected_opcs
00:09:13.362   23:42:44	-- accel/accel.sh@74 -- # get_expected_opcs
00:09:13.362   23:42:44	-- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:09:13.362   23:42:44	-- accel/accel.sh@59 -- # spdk_tgt_pid=106076
00:09:13.362   23:42:44	-- accel/accel.sh@60 -- # waitforlisten 106076
00:09:13.362   23:42:44	-- common/autotest_common.sh@829 -- # '[' -z 106076 ']'
00:09:13.362   23:42:44	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:13.362   23:42:44	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:13.362   23:42:44	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:13.362  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:13.362   23:42:44	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:13.362   23:42:44	-- common/autotest_common.sh@10 -- # set +x
00:09:13.362   23:42:44	-- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:09:13.362    23:42:44	-- accel/accel.sh@58 -- # build_accel_config
00:09:13.362    23:42:44	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:13.362    23:42:44	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:13.362    23:42:44	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:13.362    23:42:44	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:13.362    23:42:44	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:13.362    23:42:44	-- accel/accel.sh@41 -- # local IFS=,
00:09:13.362    23:42:44	-- accel/accel.sh@42 -- # jq -r .
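build_accel_config assembles a JSON accel config from the module flags and feeds it to spdk_tgt through a process substitution, which is why the target sees -c /dev/fd/63. With every flag at 0 the config is effectively empty; the shape of the pattern (schematic only, the exact JSON layout is not visible in this trace):

    accel_json_cfg=()   # would collect per-module JSON snippets if any accel flag were set
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c <(jq -r . <<< '{}')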
00:09:13.621  [2024-12-13 23:42:44.171461] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:13.621  [2024-12-13 23:42:44.171708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106076 ]
00:09:13.621  [2024-12-13 23:42:44.344851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:13.879  [2024-12-13 23:42:44.544212] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:13.880  [2024-12-13 23:42:44.544496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:15.257   23:42:45	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:15.257   23:42:45	-- common/autotest_common.sh@862 -- # return 0
00:09:15.257   23:42:45	-- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]"))
00:09:15.257    23:42:45	-- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments
00:09:15.257    23:42:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.257    23:42:45	-- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
00:09:15.257    23:42:45	-- common/autotest_common.sh@10 -- # set +x
00:09:15.257    23:42:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.257   23:42:45	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # IFS==
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # read -r opc module
00:09:15.257   23:42:45	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:09:15.257   23:42:45	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # IFS==
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # read -r opc module
00:09:15.257   23:42:45	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:09:15.257   23:42:45	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # IFS==
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # read -r opc module
00:09:15.257   23:42:45	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:09:15.257   23:42:45	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # IFS==
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # read -r opc module
00:09:15.257   23:42:45	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:09:15.257   23:42:45	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # IFS==
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # read -r opc module
00:09:15.257   23:42:45	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:09:15.257   23:42:45	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # IFS==
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # read -r opc module
00:09:15.257   23:42:45	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:09:15.257   23:42:45	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # IFS==
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # read -r opc module
00:09:15.257   23:42:45	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:09:15.257   23:42:45	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # IFS==
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # read -r opc module
00:09:15.257   23:42:45	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:09:15.257   23:42:45	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # IFS==
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # read -r opc module
00:09:15.257   23:42:45	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:09:15.257   23:42:45	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # IFS==
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # read -r opc module
00:09:15.257   23:42:45	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:09:15.257   23:42:45	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # IFS==
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # read -r opc module
00:09:15.257   23:42:45	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:09:15.257   23:42:45	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # IFS==
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # read -r opc module
00:09:15.257   23:42:45	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:09:15.257   23:42:45	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # IFS==
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # read -r opc module
00:09:15.257   23:42:45	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:09:15.257   23:42:45	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # IFS==
00:09:15.257   23:42:45	-- accel/accel.sh@64 -- # read -r opc module
00:09:15.257   23:42:45	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
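The 14-iteration loop above reads the opcode-to-module map out of the freshly started target; the same query by hand (opcode names are elided in this xtrace, but every one resolves to software since no hardware accel module is loaded):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_get_opc_assignments |
        jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # -> one opcode=module line per operation, all '=software' in this run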
00:09:15.257   23:42:45	-- accel/accel.sh@67 -- # killprocess 106076
00:09:15.257   23:42:45	-- common/autotest_common.sh@936 -- # '[' -z 106076 ']'
00:09:15.257   23:42:45	-- common/autotest_common.sh@940 -- # kill -0 106076
00:09:15.257    23:42:45	-- common/autotest_common.sh@941 -- # uname
00:09:15.257   23:42:45	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:15.257    23:42:45	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 106076
00:09:15.257   23:42:45	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:15.257   23:42:45	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:15.257   23:42:45	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 106076'
00:09:15.257  killing process with pid 106076
00:09:15.257   23:42:45	-- common/autotest_common.sh@955 -- # kill 106076
00:09:15.257   23:42:45	-- common/autotest_common.sh@960 -- # wait 106076
00:09:17.161   23:42:47	-- accel/accel.sh@68 -- # trap - ERR
00:09:17.161   23:42:47	-- accel/accel.sh@81 -- # run_test accel_help accel_perf -h
00:09:17.161   23:42:47	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:09:17.161   23:42:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:17.161   23:42:47	-- common/autotest_common.sh@10 -- # set +x
00:09:17.161   23:42:47	-- common/autotest_common.sh@1114 -- # accel_perf -h
00:09:17.161   23:42:47	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h
00:09:17.161    23:42:47	-- accel/accel.sh@12 -- # build_accel_config
00:09:17.161    23:42:47	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:17.161    23:42:47	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:17.161    23:42:47	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:17.161    23:42:47	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:17.161    23:42:47	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:17.161    23:42:47	-- accel/accel.sh@41 -- # local IFS=,
00:09:17.161    23:42:47	-- accel/accel.sh@42 -- # jq -r .
00:09:17.420   23:42:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:17.420   23:42:47	-- common/autotest_common.sh@10 -- # set +x
00:09:17.420   23:42:47	-- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress
00:09:17.420   23:42:47	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:09:17.420   23:42:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:17.420   23:42:47	-- common/autotest_common.sh@10 -- # set +x
00:09:17.420  ************************************
00:09:17.420  START TEST accel_missing_filename
00:09:17.420  ************************************
00:09:17.420   23:42:47	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress
00:09:17.420   23:42:47	-- common/autotest_common.sh@650 -- # local es=0
00:09:17.420   23:42:47	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress
00:09:17.420   23:42:47	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:09:17.420   23:42:47	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:17.420    23:42:47	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:09:17.420   23:42:47	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:17.420   23:42:47	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress
00:09:17.420   23:42:47	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress
00:09:17.420    23:42:47	-- accel/accel.sh@12 -- # build_accel_config
00:09:17.420    23:42:47	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:17.420    23:42:47	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:17.420    23:42:47	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:17.420    23:42:47	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:17.420    23:42:47	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:17.420    23:42:47	-- accel/accel.sh@41 -- # local IFS=,
00:09:17.420    23:42:47	-- accel/accel.sh@42 -- # jq -r .
00:09:17.420  [2024-12-13 23:42:48.034923] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:17.420  [2024-12-13 23:42:48.035176] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106172 ]
00:09:17.679  [2024-12-13 23:42:48.209430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:17.679  [2024-12-13 23:42:48.394936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:17.938  [2024-12-13 23:42:48.594997] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:18.507  [2024-12-13 23:42:49.029962] accel_perf.c:1385:main: *ERROR*: ERROR starting application
00:09:18.766  A filename is required.
00:09:18.766   23:42:49	-- common/autotest_common.sh@653 -- # es=234
00:09:18.766   23:42:49	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:18.766   23:42:49	-- common/autotest_common.sh@662 -- # es=106
00:09:18.766   23:42:49	-- common/autotest_common.sh@663 -- # case "$es" in
00:09:18.766   23:42:49	-- common/autotest_common.sh@670 -- # es=1
00:09:18.766   23:42:49	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
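Outside the harness the same failure should reproduce directly, since the compress workload requires an input file via -l:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress   # no -l given
    # -> 'A filename is required.' and a non-zero exit, which NOT inverts into a pass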
00:09:18.766  
00:09:18.766  real	0m1.403s
00:09:18.766  user	0m1.109s
00:09:18.766  sys	0m0.242s
00:09:18.766   23:42:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:18.766   23:42:49	-- common/autotest_common.sh@10 -- # set +x
00:09:18.766  ************************************
00:09:18.766  END TEST accel_missing_filename
00:09:18.766  ************************************
00:09:18.766   23:42:49	-- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:09:18.766   23:42:49	-- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:09:18.766   23:42:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:18.766   23:42:49	-- common/autotest_common.sh@10 -- # set +x
00:09:18.766  ************************************
00:09:18.766  START TEST accel_compress_verify
00:09:18.766  ************************************
00:09:18.766   23:42:49	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:09:18.766   23:42:49	-- common/autotest_common.sh@650 -- # local es=0
00:09:18.766   23:42:49	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:09:18.766   23:42:49	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:09:18.766   23:42:49	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:18.766    23:42:49	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:09:18.766   23:42:49	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:18.766   23:42:49	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:09:18.766   23:42:49	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:09:18.766    23:42:49	-- accel/accel.sh@12 -- # build_accel_config
00:09:18.766    23:42:49	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:18.766    23:42:49	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:18.766    23:42:49	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:18.766    23:42:49	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:18.766    23:42:49	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:18.766    23:42:49	-- accel/accel.sh@41 -- # local IFS=,
00:09:18.766    23:42:49	-- accel/accel.sh@42 -- # jq -r .
00:09:18.766  [2024-12-13 23:42:49.493359] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:18.766  [2024-12-13 23:42:49.493609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106211 ]
00:09:19.025  [2024-12-13 23:42:49.665647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:19.284  [2024-12-13 23:42:49.863278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:19.543  [2024-12-13 23:42:50.053356] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:19.802  [2024-12-13 23:42:50.488577] accel_perf.c:1385:main: *ERROR*: ERROR starting application
00:09:20.370  
00:09:20.370  Compression does not support the verify option, aborting.
00:09:20.370   23:42:50	-- common/autotest_common.sh@653 -- # es=161
00:09:20.370   23:42:50	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:20.370   23:42:50	-- common/autotest_common.sh@662 -- # es=33
00:09:20.370   23:42:50	-- common/autotest_common.sh@663 -- # case "$es" in
00:09:20.370   23:42:50	-- common/autotest_common.sh@670 -- # es=1
00:09:20.370   23:42:50	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:09:20.370  
00:09:20.370  real	0m1.416s
00:09:20.370  user	0m1.132s
00:09:20.370  sys	0m0.229s
00:09:20.370   23:42:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:20.370   23:42:50	-- common/autotest_common.sh@10 -- # set +x
00:09:20.370  ************************************
00:09:20.370  END TEST accel_compress_verify
00:09:20.370  ************************************
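The abort above is the point of this test: -y turns on result verification, which the compress path rejects. A sketch of the supported form of the same run, assuming the tree layout printed in the xtrace (keep the -l input file, drop -y):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib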
00:09:20.370   23:42:50	-- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar
00:09:20.370   23:42:50	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:09:20.370   23:42:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:20.370   23:42:50	-- common/autotest_common.sh@10 -- # set +x
00:09:20.370  ************************************
00:09:20.370  START TEST accel_wrong_workload
00:09:20.370  ************************************
00:09:20.370   23:42:50	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar
00:09:20.370   23:42:50	-- common/autotest_common.sh@650 -- # local es=0
00:09:20.370   23:42:50	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar
00:09:20.370   23:42:50	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:09:20.370   23:42:50	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:20.370    23:42:50	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:09:20.370   23:42:50	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:20.370   23:42:50	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar
00:09:20.370   23:42:50	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar
00:09:20.370    23:42:50	-- accel/accel.sh@12 -- # build_accel_config
00:09:20.370    23:42:50	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:20.370    23:42:50	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:20.370    23:42:50	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:20.370    23:42:50	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:20.370    23:42:50	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:20.370    23:42:50	-- accel/accel.sh@41 -- # local IFS=,
00:09:20.370    23:42:50	-- accel/accel.sh@42 -- # jq -r .
00:09:20.370  Unsupported workload type: foobar
00:09:20.370  [2024-12-13 23:42:50.956282] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:09:20.370  accel_perf options:
00:09:20.370  	[-h help message]
00:09:20.370  	[-q queue depth per core]
00:09:20.370  	[-C for supported workloads, use this value to configure the io vector size to test (default 1)]
00:09:20.370  	[-T number of threads per core]
00:09:20.370  	[-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:09:20.370  	[-t time in seconds]
00:09:20.370  	[-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:09:20.370  	[                                       dif_verify, dif_generate, dif_generate_copy]
00:09:20.370  	[-M assign module to the operation, not compatible with accel_assign_opc RPC]
00:09:20.370  	[-l for compress/decompress workloads, name of uncompressed input file]
00:09:20.370  	[-S for crc32c workload, use this seed value (default 0)]
00:09:20.370  	[-P for compare workload, percentage of operations that should miscompare (percent, default 0)]
00:09:20.370  	[-f for fill workload, use this BYTE value (default 255)]
00:09:20.370  	[-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:09:20.370  	[-y verify result if this switch is on]
00:09:20.370  	[-a tasks to allocate per core (default: same value as -q)]
00:09:20.370  		Can be used to spread operations across a wider range of memory.
00:09:20.370   23:42:50	-- common/autotest_common.sh@653 -- # es=1
00:09:20.370   23:42:50	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:20.370   23:42:50	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:20.370   23:42:50	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:09:20.370  
00:09:20.370  real	0m0.071s
00:09:20.370  user	0m0.085s
00:09:20.370  sys	0m0.041s
00:09:20.370   23:42:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:20.370   23:42:50	-- common/autotest_common.sh@10 -- # set +x
00:09:20.370  ************************************
00:09:20.370  END TEST accel_wrong_workload
00:09:20.370  ************************************
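For contrast with the rejected -w foobar, any name from the workload list above parses cleanly; the crc32c run a few tests below uses exactly this shape (the -c /dev/fd/62 config descriptor from the xtrace is omitted here):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y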
00:09:20.370   23:42:51	-- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:09:20.370   23:42:51	-- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:09:20.370   23:42:51	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:20.370   23:42:51	-- common/autotest_common.sh@10 -- # set +x
00:09:20.370  ************************************
00:09:20.370  START TEST accel_negative_buffers
00:09:20.370  ************************************
00:09:20.370   23:42:51	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:09:20.370   23:42:51	-- common/autotest_common.sh@650 -- # local es=0
00:09:20.370   23:42:51	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:09:20.370   23:42:51	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:09:20.370   23:42:51	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:20.370    23:42:51	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:09:20.370   23:42:51	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:20.370   23:42:51	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1
00:09:20.370   23:42:51	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:09:20.370    23:42:51	-- accel/accel.sh@12 -- # build_accel_config
00:09:20.370    23:42:51	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:20.370    23:42:51	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:20.370    23:42:51	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:20.370    23:42:51	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:20.370    23:42:51	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:20.370    23:42:51	-- accel/accel.sh@41 -- # local IFS=,
00:09:20.370    23:42:51	-- accel/accel.sh@42 -- # jq -r .
00:09:20.370  -x option must be non-negative.
00:09:20.370  [2024-12-13 23:42:51.075955] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:09:20.370  accel_perf options:
00:09:20.370  	[-h help message]
00:09:20.370  	[-q queue depth per core]
00:09:20.370  	[-C for supported workloads, use this value to configure the io vector size to test (default 1)]
00:09:20.370  	[-T number of threads per core]
00:09:20.370  	[-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:09:20.370  	[-t time in seconds]
00:09:20.370  	[-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:09:20.370  	[                                       dif_verify, dif_generate, dif_generate_copy]
00:09:20.370  	[-M assign module to the operation, not compatible with accel_assign_opc RPC]
00:09:20.370  	[-l for compress/decompress workloads, name of uncompressed input file]
00:09:20.370  	[-S for crc32c workload, use this seed value (default 0)]
00:09:20.370  	[-P for compare workload, percentage of operations that should miscompare (percent, default 0)]
00:09:20.370  	[-f for fill workload, use this BYTE value (default 255)]
00:09:20.370  	[-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:09:20.370  	[-y verify result if this switch is on]
00:09:20.370  	[-a tasks to allocate per core (default: same value as -q)]
00:09:20.370  		Can be used to spread operations across a wider range of memory.
00:09:20.629   23:42:51	-- common/autotest_common.sh@653 -- # es=1
00:09:20.629   23:42:51	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:20.629   23:42:51	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:20.629   23:42:51	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:09:20.629  
00:09:20.629  real	0m0.070s
00:09:20.629  user	0m0.104s
00:09:20.629  sys	0m0.017s
00:09:20.629   23:42:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:20.629  ************************************
00:09:20.629  END TEST accel_negative_buffers
00:09:20.629  ************************************
00:09:20.629   23:42:51	-- common/autotest_common.sh@10 -- # set +x
00:09:20.629   23:42:51	-- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y
00:09:20.629   23:42:51	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:09:20.629   23:42:51	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:20.629   23:42:51	-- common/autotest_common.sh@10 -- # set +x
00:09:20.629  ************************************
00:09:20.629  START TEST accel_crc32c
00:09:20.629  ************************************
00:09:20.629   23:42:51	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y
00:09:20.629   23:42:51	-- accel/accel.sh@16 -- # local accel_opc
00:09:20.629   23:42:51	-- accel/accel.sh@17 -- # local accel_module
00:09:20.629    23:42:51	-- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:09:20.629    23:42:51	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:09:20.629     23:42:51	-- accel/accel.sh@12 -- # build_accel_config
00:09:20.629     23:42:51	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:20.629     23:42:51	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:20.630     23:42:51	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:20.630     23:42:51	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:20.630     23:42:51	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:20.630     23:42:51	-- accel/accel.sh@41 -- # local IFS=,
00:09:20.630     23:42:51	-- accel/accel.sh@42 -- # jq -r .
00:09:20.630  [2024-12-13 23:42:51.197122] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:20.630  [2024-12-13 23:42:51.197305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106306 ]
00:09:20.888  [2024-12-13 23:42:51.368683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:20.888  [2024-12-13 23:42:51.560038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:23.421   23:42:53	-- accel/accel.sh@18 -- # out='
00:09:23.421  SPDK Configuration:
00:09:23.421  Core mask:      0x1
00:09:23.421  
00:09:23.421  Accel Perf Configuration:
00:09:23.421  Workload Type:  crc32c
00:09:23.421  CRC-32C seed:   32
00:09:23.421  Transfer size:  4096 bytes
00:09:23.421  Vector count:   1
00:09:23.421  Module:         software
00:09:23.421  Queue depth:    32
00:09:23.421  Allocate depth: 32
00:09:23.421  # threads/core: 1
00:09:23.421  Run time:       1 seconds
00:09:23.421  Verify:         Yes
00:09:23.421  
00:09:23.421  Running for 1 seconds...
00:09:23.421  
00:09:23.421  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:09:23.421  ------------------------------------------------------------------------------------
00:09:23.421  0,0                      522496/s       2041 MiB/s                0                0
00:09:23.421  ====================================================================================
00:09:23.421  Total                    522496/s       2041 MiB/s                0                0'
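Spot-checking the bandwidth column: at a 4096-byte transfer size, 522496 transfers/s is exactly the reported 2041 MiB/s:

    $ echo $(( 522496 * 4096 / 1024 / 1024 ))
    2041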
00:09:23.421   23:42:53	-- accel/accel.sh@20 -- # IFS=:
00:09:23.421    23:42:53	-- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:09:23.421   23:42:53	-- accel/accel.sh@20 -- # read -r var val
00:09:23.421    23:42:53	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:09:23.421     23:42:53	-- accel/accel.sh@12 -- # build_accel_config
00:09:23.421     23:42:53	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:23.421     23:42:53	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:23.421     23:42:53	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:23.421     23:42:53	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:23.421     23:42:53	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:23.421     23:42:53	-- accel/accel.sh@41 -- # local IFS=,
00:09:23.421     23:42:53	-- accel/accel.sh@42 -- # jq -r .
00:09:23.421  [2024-12-13 23:42:53.603554] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:23.421  [2024-12-13 23:42:53.603753] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106341 ]
00:09:23.421  [2024-12-13 23:42:53.772999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:23.421  [2024-12-13 23:42:53.971055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:23.680   23:42:54	-- accel/accel.sh@21 -- # val=
00:09:23.680   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:23.680   23:42:54	-- accel/accel.sh@21 -- # val=
00:09:23.680   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:23.680   23:42:54	-- accel/accel.sh@21 -- # val=0x1
00:09:23.680   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:23.680   23:42:54	-- accel/accel.sh@21 -- # val=
00:09:23.680   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:23.680   23:42:54	-- accel/accel.sh@21 -- # val=
00:09:23.680   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:23.680   23:42:54	-- accel/accel.sh@21 -- # val=crc32c
00:09:23.680   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.680   23:42:54	-- accel/accel.sh@24 -- # accel_opc=crc32c
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:23.680   23:42:54	-- accel/accel.sh@21 -- # val=32
00:09:23.680   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:23.680   23:42:54	-- accel/accel.sh@21 -- # val='4096 bytes'
00:09:23.680   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:23.680   23:42:54	-- accel/accel.sh@21 -- # val=
00:09:23.680   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:23.680   23:42:54	-- accel/accel.sh@21 -- # val=software
00:09:23.680   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.680   23:42:54	-- accel/accel.sh@23 -- # accel_module=software
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:23.680   23:42:54	-- accel/accel.sh@21 -- # val=32
00:09:23.680   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:23.680   23:42:54	-- accel/accel.sh@21 -- # val=32
00:09:23.680   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:23.680   23:42:54	-- accel/accel.sh@21 -- # val=1
00:09:23.680   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.680   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:23.680   23:42:54	-- accel/accel.sh@21 -- # val='1 seconds'
00:09:23.680   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.681   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.681   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:23.681   23:42:54	-- accel/accel.sh@21 -- # val=Yes
00:09:23.681   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.681   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.681   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:23.681   23:42:54	-- accel/accel.sh@21 -- # val=
00:09:23.681   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.681   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.681   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:23.681   23:42:54	-- accel/accel.sh@21 -- # val=
00:09:23.681   23:42:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:23.681   23:42:54	-- accel/accel.sh@20 -- # IFS=:
00:09:23.681   23:42:54	-- accel/accel.sh@20 -- # read -r var val
00:09:25.583   23:42:55	-- accel/accel.sh@21 -- # val=
00:09:25.583   23:42:55	-- accel/accel.sh@22 -- # case "$var" in
00:09:25.583   23:42:55	-- accel/accel.sh@20 -- # IFS=:
00:09:25.583   23:42:55	-- accel/accel.sh@20 -- # read -r var val
00:09:25.583   23:42:55	-- accel/accel.sh@21 -- # val=
00:09:25.583   23:42:55	-- accel/accel.sh@22 -- # case "$var" in
00:09:25.583   23:42:55	-- accel/accel.sh@20 -- # IFS=:
00:09:25.583   23:42:55	-- accel/accel.sh@20 -- # read -r var val
00:09:25.583   23:42:55	-- accel/accel.sh@21 -- # val=
00:09:25.583   23:42:55	-- accel/accel.sh@22 -- # case "$var" in
00:09:25.583   23:42:55	-- accel/accel.sh@20 -- # IFS=:
00:09:25.583   23:42:55	-- accel/accel.sh@20 -- # read -r var val
00:09:25.583   23:42:55	-- accel/accel.sh@21 -- # val=
00:09:25.583   23:42:55	-- accel/accel.sh@22 -- # case "$var" in
00:09:25.583   23:42:55	-- accel/accel.sh@20 -- # IFS=:
00:09:25.583   23:42:55	-- accel/accel.sh@20 -- # read -r var val
00:09:25.583   23:42:55	-- accel/accel.sh@21 -- # val=
00:09:25.583   23:42:55	-- accel/accel.sh@22 -- # case "$var" in
00:09:25.583   23:42:55	-- accel/accel.sh@20 -- # IFS=:
00:09:25.583   23:42:55	-- accel/accel.sh@20 -- # read -r var val
00:09:25.583   23:42:55	-- accel/accel.sh@21 -- # val=
00:09:25.583   23:42:55	-- accel/accel.sh@22 -- # case "$var" in
00:09:25.583   23:42:55	-- accel/accel.sh@20 -- # IFS=:
00:09:25.583   23:42:55	-- accel/accel.sh@20 -- # read -r var val
00:09:25.583   23:42:55	-- accel/accel.sh@28 -- # [[ -n software ]]
00:09:25.583   23:42:55	-- accel/accel.sh@28 -- # [[ -n crc32c ]]
00:09:25.583   23:42:55	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:25.583  
00:09:25.583  real	0m4.823s
00:09:25.583  user	0m4.233s
00:09:25.583  sys	0m0.434s
00:09:25.583   23:42:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:25.583   23:42:55	-- common/autotest_common.sh@10 -- # set +x
00:09:25.583  ************************************
00:09:25.583  END TEST accel_crc32c
00:09:25.583  ************************************
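Each START TEST/END TEST banner pair, with the real/user/sys triple in between, comes from the run_test harness. A hypothetical reconstruction of the pattern visible in this log (the real helper in autotest_common.sh differs in detail, e.g. the xtrace_disable handling):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # produces the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }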
00:09:25.583   23:42:56	-- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:09:25.583   23:42:56	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:09:25.583   23:42:56	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:25.583   23:42:56	-- common/autotest_common.sh@10 -- # set +x
00:09:25.583  ************************************
00:09:25.583  START TEST accel_crc32c_C2
00:09:25.583  ************************************
00:09:25.583   23:42:56	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2
00:09:25.583   23:42:56	-- accel/accel.sh@16 -- # local accel_opc
00:09:25.583   23:42:56	-- accel/accel.sh@17 -- # local accel_module
00:09:25.583    23:42:56	-- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2
00:09:25.583    23:42:56	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:09:25.583     23:42:56	-- accel/accel.sh@12 -- # build_accel_config
00:09:25.583     23:42:56	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:25.583     23:42:56	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:25.583     23:42:56	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:25.583     23:42:56	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:25.583     23:42:56	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:25.583     23:42:56	-- accel/accel.sh@41 -- # local IFS=,
00:09:25.583     23:42:56	-- accel/accel.sh@42 -- # jq -r .
00:09:25.583  [2024-12-13 23:42:56.071807] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:25.583  [2024-12-13 23:42:56.072134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106393 ]
00:09:25.583  [2024-12-13 23:42:56.241698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:25.841  [2024-12-13 23:42:56.430182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:27.745   23:42:58	-- accel/accel.sh@18 -- # out='
00:09:27.745  SPDK Configuration:
00:09:27.745  Core mask:      0x1
00:09:27.745  
00:09:27.745  Accel Perf Configuration:
00:09:27.745  Workload Type:  crc32c
00:09:27.745  CRC-32C seed:   0
00:09:27.745  Transfer size:  4096 bytes
00:09:27.745  Vector count:   2
00:09:27.745  Module:         software
00:09:27.745  Queue depth:    32
00:09:27.745  Allocate depth: 32
00:09:27.745  # threads/core: 1
00:09:27.745  Run time:       1 seconds
00:09:27.745  Verify:         Yes
00:09:27.745  
00:09:27.745  Running for 1 seconds...
00:09:27.745  
00:09:27.745  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:09:27.745  ------------------------------------------------------------------------------------
00:09:27.745  0,0                      397248/s       3103 MiB/s                0                0
00:09:27.745  ====================================================================================
00:09:27.745  Total                    397248/s       3103 MiB/s                0                0'
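With -C 2 every operation carries two 4096-byte vectors, so bytes per transfer double relative to the single-vector crc32c run and the Total row matches the per-core row:

    $ echo $(( 397248 * 2 * 4096 / 1024 / 1024 ))
    3103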
00:09:27.745   23:42:58	-- accel/accel.sh@20 -- # IFS=:
00:09:27.745    23:42:58	-- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:09:27.745   23:42:58	-- accel/accel.sh@20 -- # read -r var val
00:09:27.745    23:42:58	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:09:27.745     23:42:58	-- accel/accel.sh@12 -- # build_accel_config
00:09:27.745     23:42:58	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:27.745     23:42:58	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:27.745     23:42:58	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:27.745     23:42:58	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:27.745     23:42:58	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:27.745     23:42:58	-- accel/accel.sh@41 -- # local IFS=,
00:09:27.745     23:42:58	-- accel/accel.sh@42 -- # jq -r .
00:09:27.745  [2024-12-13 23:42:58.469437] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:27.745  [2024-12-13 23:42:58.469726] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106432 ]
00:09:28.004  [2024-12-13 23:42:58.637572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:28.263  [2024-12-13 23:42:58.846619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val=
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val=
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val=0x1
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val=
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val=
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val=crc32c
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@24 -- # accel_opc=crc32c
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val=0
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val='4096 bytes'
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val=
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val=software
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@23 -- # accel_module=software
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val=32
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val=32
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val=1
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val='1 seconds'
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val=Yes
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val=
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:28.523   23:42:59	-- accel/accel.sh@21 -- # val=
00:09:28.523   23:42:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # IFS=:
00:09:28.523   23:42:59	-- accel/accel.sh@20 -- # read -r var val
00:09:30.429   23:43:00	-- accel/accel.sh@21 -- # val=
00:09:30.429   23:43:00	-- accel/accel.sh@22 -- # case "$var" in
00:09:30.429   23:43:00	-- accel/accel.sh@20 -- # IFS=:
00:09:30.429   23:43:00	-- accel/accel.sh@20 -- # read -r var val
00:09:30.429   23:43:00	-- accel/accel.sh@21 -- # val=
00:09:30.429   23:43:00	-- accel/accel.sh@22 -- # case "$var" in
00:09:30.429   23:43:00	-- accel/accel.sh@20 -- # IFS=:
00:09:30.429   23:43:00	-- accel/accel.sh@20 -- # read -r var val
00:09:30.429   23:43:00	-- accel/accel.sh@21 -- # val=
00:09:30.429   23:43:00	-- accel/accel.sh@22 -- # case "$var" in
00:09:30.429   23:43:00	-- accel/accel.sh@20 -- # IFS=:
00:09:30.429   23:43:00	-- accel/accel.sh@20 -- # read -r var val
00:09:30.429   23:43:00	-- accel/accel.sh@21 -- # val=
00:09:30.429   23:43:00	-- accel/accel.sh@22 -- # case "$var" in
00:09:30.429   23:43:00	-- accel/accel.sh@20 -- # IFS=:
00:09:30.429   23:43:00	-- accel/accel.sh@20 -- # read -r var val
00:09:30.429   23:43:00	-- accel/accel.sh@21 -- # val=
00:09:30.429   23:43:00	-- accel/accel.sh@22 -- # case "$var" in
00:09:30.429   23:43:00	-- accel/accel.sh@20 -- # IFS=:
00:09:30.429   23:43:00	-- accel/accel.sh@20 -- # read -r var val
00:09:30.429   23:43:00	-- accel/accel.sh@21 -- # val=
00:09:30.429   23:43:00	-- accel/accel.sh@22 -- # case "$var" in
00:09:30.429   23:43:00	-- accel/accel.sh@20 -- # IFS=:
00:09:30.429   23:43:00	-- accel/accel.sh@20 -- # read -r var val
00:09:30.429   23:43:00	-- accel/accel.sh@28 -- # [[ -n software ]]
00:09:30.429   23:43:00	-- accel/accel.sh@28 -- # [[ -n crc32c ]]
00:09:30.429   23:43:00	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:30.429  
00:09:30.429  real	0m4.856s
00:09:30.429  user	0m4.278s
00:09:30.429  sys	0m0.412s
00:09:30.429   23:43:00	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:30.429   23:43:00	-- common/autotest_common.sh@10 -- # set +x
00:09:30.429  ************************************
00:09:30.429  END TEST accel_crc32c_C2
00:09:30.429  ************************************
00:09:30.429   23:43:00	-- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:09:30.429   23:43:00	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:09:30.429   23:43:00	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:30.429   23:43:00	-- common/autotest_common.sh@10 -- # set +x
00:09:30.429  ************************************
00:09:30.429  START TEST accel_copy
00:09:30.429  ************************************
00:09:30.429   23:43:00	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y
00:09:30.429   23:43:00	-- accel/accel.sh@16 -- # local accel_opc
00:09:30.429   23:43:00	-- accel/accel.sh@17 -- # local accel_module
00:09:30.429    23:43:00	-- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y
00:09:30.429    23:43:00	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:09:30.429     23:43:00	-- accel/accel.sh@12 -- # build_accel_config
00:09:30.429     23:43:00	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:30.429     23:43:00	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:30.429     23:43:00	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:30.429     23:43:00	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:30.429     23:43:00	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:30.429     23:43:00	-- accel/accel.sh@41 -- # local IFS=,
00:09:30.429     23:43:00	-- accel/accel.sh@42 -- # jq -r .
00:09:30.429  [2024-12-13 23:43:00.979121] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:30.429  [2024-12-13 23:43:00.979313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106487 ]
00:09:30.429  [2024-12-13 23:43:01.149278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:30.688  [2024-12-13 23:43:01.334333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:32.603   23:43:03	-- accel/accel.sh@18 -- # out='
00:09:32.603  SPDK Configuration:
00:09:32.603  Core mask:      0x1
00:09:32.603  
00:09:32.603  Accel Perf Configuration:
00:09:32.603  Workload Type:  copy
00:09:32.603  Transfer size:  4096 bytes
00:09:32.603  Vector count:   1
00:09:32.603  Module:         software
00:09:32.603  Queue depth:    32
00:09:32.603  Allocate depth: 32
00:09:32.603  # threads/core: 1
00:09:32.603  Run time:       1 seconds
00:09:32.603  Verify:         Yes
00:09:32.603  
00:09:32.603  Running for 1 seconds...
00:09:32.603  
00:09:32.603  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:09:32.603  ------------------------------------------------------------------------------------
00:09:32.603  0,0                      311008/s       1214 MiB/s                0                0
00:09:32.603  ====================================================================================
00:09:32.603  Total                    311008/s       1214 MiB/s                0                0'
00:09:32.603   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:32.603   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:32.603    23:43:03	-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:09:32.603    23:43:03	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:09:32.603     23:43:03	-- accel/accel.sh@12 -- # build_accel_config
00:09:32.603     23:43:03	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:32.603     23:43:03	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:32.603     23:43:03	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:32.603     23:43:03	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:32.603     23:43:03	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:32.603     23:43:03	-- accel/accel.sh@41 -- # local IFS=,
00:09:32.603     23:43:03	-- accel/accel.sh@42 -- # jq -r .
00:09:32.862  [2024-12-13 23:43:03.365244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:32.862  [2024-12-13 23:43:03.365444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106522 ]
00:09:32.862  [2024-12-13 23:43:03.531295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:33.121  [2024-12-13 23:43:03.748462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:33.381   23:43:03	-- accel/accel.sh@21 -- # val=
00:09:33.381   23:43:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:33.381   23:43:03	-- accel/accel.sh@21 -- # val=
00:09:33.381   23:43:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:33.381   23:43:03	-- accel/accel.sh@21 -- # val=0x1
00:09:33.381   23:43:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:33.381   23:43:03	-- accel/accel.sh@21 -- # val=
00:09:33.381   23:43:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:33.381   23:43:03	-- accel/accel.sh@21 -- # val=
00:09:33.381   23:43:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:33.381   23:43:03	-- accel/accel.sh@21 -- # val=copy
00:09:33.381   23:43:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:33.381   23:43:03	-- accel/accel.sh@24 -- # accel_opc=copy
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:33.381   23:43:03	-- accel/accel.sh@21 -- # val='4096 bytes'
00:09:33.381   23:43:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:33.381   23:43:03	-- accel/accel.sh@21 -- # val=
00:09:33.381   23:43:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:33.381   23:43:03	-- accel/accel.sh@21 -- # val=software
00:09:33.381   23:43:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:33.381   23:43:03	-- accel/accel.sh@23 -- # accel_module=software
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:33.381   23:43:03	-- accel/accel.sh@21 -- # val=32
00:09:33.381   23:43:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:33.381   23:43:03	-- accel/accel.sh@21 -- # val=32
00:09:33.381   23:43:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:33.381   23:43:03	-- accel/accel.sh@21 -- # val=1
00:09:33.381   23:43:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:33.381   23:43:03	-- accel/accel.sh@21 -- # val='1 seconds'
00:09:33.381   23:43:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:33.381   23:43:03	-- accel/accel.sh@21 -- # val=Yes
00:09:33.381   23:43:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:33.381   23:43:03	-- accel/accel.sh@21 -- # val=
00:09:33.381   23:43:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:33.381   23:43:03	-- accel/accel.sh@21 -- # val=
00:09:33.381   23:43:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # IFS=:
00:09:33.381   23:43:03	-- accel/accel.sh@20 -- # read -r var val
00:09:35.284   23:43:05	-- accel/accel.sh@21 -- # val=
00:09:35.284   23:43:05	-- accel/accel.sh@22 -- # case "$var" in
00:09:35.284   23:43:05	-- accel/accel.sh@20 -- # IFS=:
00:09:35.284   23:43:05	-- accel/accel.sh@20 -- # read -r var val
00:09:35.284   23:43:05	-- accel/accel.sh@21 -- # val=
00:09:35.284   23:43:05	-- accel/accel.sh@22 -- # case "$var" in
00:09:35.284   23:43:05	-- accel/accel.sh@20 -- # IFS=:
00:09:35.284   23:43:05	-- accel/accel.sh@20 -- # read -r var val
00:09:35.284   23:43:05	-- accel/accel.sh@21 -- # val=
00:09:35.285   23:43:05	-- accel/accel.sh@22 -- # case "$var" in
00:09:35.285   23:43:05	-- accel/accel.sh@20 -- # IFS=:
00:09:35.285   23:43:05	-- accel/accel.sh@20 -- # read -r var val
00:09:35.285   23:43:05	-- accel/accel.sh@21 -- # val=
00:09:35.285   23:43:05	-- accel/accel.sh@22 -- # case "$var" in
00:09:35.285   23:43:05	-- accel/accel.sh@20 -- # IFS=:
00:09:35.285   23:43:05	-- accel/accel.sh@20 -- # read -r var val
00:09:35.285   23:43:05	-- accel/accel.sh@21 -- # val=
00:09:35.285   23:43:05	-- accel/accel.sh@22 -- # case "$var" in
00:09:35.285   23:43:05	-- accel/accel.sh@20 -- # IFS=:
00:09:35.285   23:43:05	-- accel/accel.sh@20 -- # read -r var val
00:09:35.285   23:43:05	-- accel/accel.sh@21 -- # val=
00:09:35.285   23:43:05	-- accel/accel.sh@22 -- # case "$var" in
00:09:35.285   23:43:05	-- accel/accel.sh@20 -- # IFS=:
00:09:35.285   23:43:05	-- accel/accel.sh@20 -- # read -r var val
00:09:35.285   23:43:05	-- accel/accel.sh@28 -- # [[ -n software ]]
00:09:35.285   23:43:05	-- accel/accel.sh@28 -- # [[ -n copy ]]
00:09:35.285   23:43:05	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:35.285  
00:09:35.285  real	0m4.819s
00:09:35.285  user	0m4.191s
00:09:35.285  sys	0m0.458s
00:09:35.285   23:43:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:35.285  ************************************
00:09:35.285  END TEST accel_copy
00:09:35.285  ************************************
00:09:35.285   23:43:05	-- common/autotest_common.sh@10 -- # set +x
00:09:35.285   23:43:05	-- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:09:35.285   23:43:05	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:09:35.285   23:43:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:35.285   23:43:05	-- common/autotest_common.sh@10 -- # set +x
00:09:35.285  ************************************
00:09:35.285  START TEST accel_fill
00:09:35.285  ************************************
00:09:35.285   23:43:05	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:09:35.285   23:43:05	-- accel/accel.sh@16 -- # local accel_opc
00:09:35.285   23:43:05	-- accel/accel.sh@17 -- # local accel_module
00:09:35.285    23:43:05	-- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:09:35.285    23:43:05	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:09:35.285     23:43:05	-- accel/accel.sh@12 -- # build_accel_config
00:09:35.285     23:43:05	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:35.285     23:43:05	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:35.285     23:43:05	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:35.285     23:43:05	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:35.285     23:43:05	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:35.285     23:43:05	-- accel/accel.sh@41 -- # local IFS=,
00:09:35.285     23:43:05	-- accel/accel.sh@42 -- # jq -r .
00:09:35.285  [2024-12-13 23:43:05.850766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:35.285  [2024-12-13 23:43:05.850944] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106567 ]
00:09:35.285  [2024-12-13 23:43:06.005941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:35.544  [2024-12-13 23:43:06.208841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:37.498   23:43:08	-- accel/accel.sh@18 -- # out='
00:09:37.498  SPDK Configuration:
00:09:37.498  Core mask:      0x1
00:09:37.498  
00:09:37.498  Accel Perf Configuration:
00:09:37.498  Workload Type:  fill
00:09:37.498  Fill pattern:   0x80
00:09:37.498  Transfer size:  4096 bytes
00:09:37.498  Vector count:   1
00:09:37.498  Module:         software
00:09:37.498  Queue depth:    64
00:09:37.498  Allocate depth: 64
00:09:37.498  # threads/core: 1
00:09:37.498  Run time:       1 seconds
00:09:37.498  Verify:         Yes
00:09:37.498  
00:09:37.498  Running for 1 seconds...
00:09:37.498  
00:09:37.498  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:09:37.498  ------------------------------------------------------------------------------------
00:09:37.498  0,0                      471104/s       1840 MiB/s                0                0
00:09:37.498  ====================================================================================
00:09:37.498  Total                    471104/s       1840 MiB/s                0                0'
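The fill test passes -f 128 on the command line, and the configuration block reports the same byte in hex:

    $ printf '0x%02x\n' 128
    0x80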
00:09:37.498   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:37.498    23:43:08	-- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:09:37.498   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:37.498    23:43:08	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:09:37.498     23:43:08	-- accel/accel.sh@12 -- # build_accel_config
00:09:37.498     23:43:08	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:37.498     23:43:08	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:37.498     23:43:08	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:37.498     23:43:08	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:37.498     23:43:08	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:37.498     23:43:08	-- accel/accel.sh@41 -- # local IFS=,
00:09:37.498     23:43:08	-- accel/accel.sh@42 -- # jq -r .
00:09:37.498  [2024-12-13 23:43:08.228649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:37.498  [2024-12-13 23:43:08.228852] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106609 ]
00:09:37.757  [2024-12-13 23:43:08.398090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:38.016  [2024-12-13 23:43:08.607371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val=
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val=
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val=0x1
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val=
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val=
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val=fill
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@24 -- # accel_opc=fill
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val=0x80
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val='4096 bytes'
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val=
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val=software
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@23 -- # accel_module=software
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val=64
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val=64
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val=1
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val='1 seconds'
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val=Yes
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val=
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:38.276   23:43:08	-- accel/accel.sh@21 -- # val=
00:09:38.276   23:43:08	-- accel/accel.sh@22 -- # case "$var" in
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # IFS=:
00:09:38.276   23:43:08	-- accel/accel.sh@20 -- # read -r var val
00:09:40.181   23:43:10	-- accel/accel.sh@21 -- # val=
00:09:40.181   23:43:10	-- accel/accel.sh@22 -- # case "$var" in
00:09:40.181   23:43:10	-- accel/accel.sh@20 -- # IFS=:
00:09:40.181   23:43:10	-- accel/accel.sh@20 -- # read -r var val
00:09:40.181   23:43:10	-- accel/accel.sh@21 -- # val=
00:09:40.181   23:43:10	-- accel/accel.sh@22 -- # case "$var" in
00:09:40.181   23:43:10	-- accel/accel.sh@20 -- # IFS=:
00:09:40.181   23:43:10	-- accel/accel.sh@20 -- # read -r var val
00:09:40.181   23:43:10	-- accel/accel.sh@21 -- # val=
00:09:40.181   23:43:10	-- accel/accel.sh@22 -- # case "$var" in
00:09:40.181   23:43:10	-- accel/accel.sh@20 -- # IFS=:
00:09:40.181   23:43:10	-- accel/accel.sh@20 -- # read -r var val
00:09:40.181   23:43:10	-- accel/accel.sh@21 -- # val=
00:09:40.181   23:43:10	-- accel/accel.sh@22 -- # case "$var" in
00:09:40.181   23:43:10	-- accel/accel.sh@20 -- # IFS=:
00:09:40.181   23:43:10	-- accel/accel.sh@20 -- # read -r var val
00:09:40.181   23:43:10	-- accel/accel.sh@21 -- # val=
00:09:40.181   23:43:10	-- accel/accel.sh@22 -- # case "$var" in
00:09:40.181   23:43:10	-- accel/accel.sh@20 -- # IFS=:
00:09:40.181   23:43:10	-- accel/accel.sh@20 -- # read -r var val
00:09:40.181   23:43:10	-- accel/accel.sh@21 -- # val=
00:09:40.181   23:43:10	-- accel/accel.sh@22 -- # case "$var" in
00:09:40.181   23:43:10	-- accel/accel.sh@20 -- # IFS=:
00:09:40.181   23:43:10	-- accel/accel.sh@20 -- # read -r var val
00:09:40.181   23:43:10	-- accel/accel.sh@28 -- # [[ -n software ]]
00:09:40.181   23:43:10	-- accel/accel.sh@28 -- # [[ -n fill ]]
00:09:40.181   23:43:10	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:40.181  
00:09:40.181  real	0m4.801s
00:09:40.181  user	0m4.205s
00:09:40.181  sys	0m0.442s
00:09:40.181   23:43:10	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:40.181  ************************************
00:09:40.181  END TEST accel_fill
00:09:40.181  ************************************
00:09:40.181   23:43:10	-- common/autotest_common.sh@10 -- # set +x
00:09:40.181   23:43:10	-- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:09:40.181   23:43:10	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:09:40.181   23:43:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:40.181   23:43:10	-- common/autotest_common.sh@10 -- # set +x
00:09:40.181  ************************************
00:09:40.181  START TEST accel_copy_crc32c
00:09:40.181  ************************************
00:09:40.181   23:43:10	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y
00:09:40.181   23:43:10	-- accel/accel.sh@16 -- # local accel_opc
00:09:40.181   23:43:10	-- accel/accel.sh@17 -- # local accel_module
00:09:40.181    23:43:10	-- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y
00:09:40.181    23:43:10	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:09:40.181     23:43:10	-- accel/accel.sh@12 -- # build_accel_config
00:09:40.181     23:43:10	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:40.181     23:43:10	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:40.181     23:43:10	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:40.181     23:43:10	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:40.181     23:43:10	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:40.181     23:43:10	-- accel/accel.sh@41 -- # local IFS=,
00:09:40.181     23:43:10	-- accel/accel.sh@42 -- # jq -r .
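The -c /dev/fd/62 argument above is the tell-tale of bash process substitution: build_accel_config collects JSON fragments in the accel_json_cfg array, joins them with IFS=',' through jq -r ., and accel_perf then reads the finished config as an ordinary file descriptor. A minimal sketch of the pattern, assuming the binary path logged above and an empty fragment list mirroring this run's no-driver default (the wrapper JSON shape here is illustrative, not the exact accel.sh schema):

    # Generate a JSON config on the fly and hand it to accel_perf as a file.
    accel_json_cfg=()                       # per-module JSON fragments would go here
    config_json() { local IFS=,; jq -r . <<< "[${accel_json_cfg[*]}]"; }
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c <(config_json) -t 1 -w copy_crc32c -y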
00:09:40.181  [2024-12-13 23:43:10.708563] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:40.181  [2024-12-13 23:43:10.708754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106661 ]
00:09:40.181  [2024-12-13 23:43:10.877956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:40.440  [2024-12-13 23:43:11.084779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:42.346   23:43:13	-- accel/accel.sh@18 -- # out='
00:09:42.346  SPDK Configuration:
00:09:42.346  Core mask:      0x1
00:09:42.346  
00:09:42.346  Accel Perf Configuration:
00:09:42.346  Workload Type:  copy_crc32c
00:09:42.346  CRC-32C seed:   0
00:09:42.346  Vector size:    4096 bytes
00:09:42.346  Transfer size:  4096 bytes
00:09:42.346  Vector count:   1
00:09:42.346  Module:         software
00:09:42.346  Queue depth:    32
00:09:42.346  Allocate depth: 32
00:09:42.346  # threads/core: 1
00:09:42.346  Run time:       1 seconds
00:09:42.346  Verify:         Yes
00:09:42.346  
00:09:42.346  Running for 1 seconds...
00:09:42.346  
00:09:42.346  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:09:42.346  ------------------------------------------------------------------------------------
00:09:42.346  0,0                      259008/s       1011 MiB/s                0                0
00:09:42.346  ====================================================================================
00:09:42.346  Total                    259008/s       1011 MiB/s                0                0'
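The Bandwidth column is simply transfers per second times the 4096-byte transfer size, reported in MiB/s; the figure above checks out with integer shell arithmetic:

    echo $(( 259008 * 4096 / 1024 / 1024 ))   # -> 1011 (MiB/s, matching the table)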
00:09:42.346   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:42.346    23:43:13	-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:09:42.346   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:42.346    23:43:13	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:09:42.346     23:43:13	-- accel/accel.sh@12 -- # build_accel_config
00:09:42.346     23:43:13	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:42.346     23:43:13	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:42.346     23:43:13	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:42.346     23:43:13	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:42.346     23:43:13	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:42.346     23:43:13	-- accel/accel.sh@41 -- # local IFS=,
00:09:42.346     23:43:13	-- accel/accel.sh@42 -- # jq -r .
00:09:42.605  [2024-12-13 23:43:13.110249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:42.605  [2024-12-13 23:43:13.110596] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106703 ]
00:09:42.605  [2024-12-13 23:43:13.281327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:42.864  [2024-12-13 23:43:13.489744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:43.123   23:43:13	-- accel/accel.sh@21 -- # val=
00:09:43.123   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.123   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.123   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.123   23:43:13	-- accel/accel.sh@21 -- # val=
00:09:43.123   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.123   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.123   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.123   23:43:13	-- accel/accel.sh@21 -- # val=0x1
00:09:43.123   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.123   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.123   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.123   23:43:13	-- accel/accel.sh@21 -- # val=
00:09:43.123   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.123   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.123   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.123   23:43:13	-- accel/accel.sh@21 -- # val=
00:09:43.123   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.123   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.123   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.123   23:43:13	-- accel/accel.sh@21 -- # val=copy_crc32c
00:09:43.123   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.123   23:43:13	-- accel/accel.sh@24 -- # accel_opc=copy_crc32c
00:09:43.123   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.124   23:43:13	-- accel/accel.sh@21 -- # val=0
00:09:43.124   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.124   23:43:13	-- accel/accel.sh@21 -- # val='4096 bytes'
00:09:43.124   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.124   23:43:13	-- accel/accel.sh@21 -- # val='4096 bytes'
00:09:43.124   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.124   23:43:13	-- accel/accel.sh@21 -- # val=
00:09:43.124   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.124   23:43:13	-- accel/accel.sh@21 -- # val=software
00:09:43.124   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.124   23:43:13	-- accel/accel.sh@23 -- # accel_module=software
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.124   23:43:13	-- accel/accel.sh@21 -- # val=32
00:09:43.124   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.124   23:43:13	-- accel/accel.sh@21 -- # val=32
00:09:43.124   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.124   23:43:13	-- accel/accel.sh@21 -- # val=1
00:09:43.124   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.124   23:43:13	-- accel/accel.sh@21 -- # val='1 seconds'
00:09:43.124   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.124   23:43:13	-- accel/accel.sh@21 -- # val=Yes
00:09:43.124   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.124   23:43:13	-- accel/accel.sh@21 -- # val=
00:09:43.124   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:43.124   23:43:13	-- accel/accel.sh@21 -- # val=
00:09:43.124   23:43:13	-- accel/accel.sh@22 -- # case "$var" in
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # IFS=:
00:09:43.124   23:43:13	-- accel/accel.sh@20 -- # read -r var val
00:09:45.030   23:43:15	-- accel/accel.sh@21 -- # val=
00:09:45.030   23:43:15	-- accel/accel.sh@22 -- # case "$var" in
00:09:45.030   23:43:15	-- accel/accel.sh@20 -- # IFS=:
00:09:45.030   23:43:15	-- accel/accel.sh@20 -- # read -r var val
00:09:45.030   23:43:15	-- accel/accel.sh@21 -- # val=
00:09:45.030   23:43:15	-- accel/accel.sh@22 -- # case "$var" in
00:09:45.030   23:43:15	-- accel/accel.sh@20 -- # IFS=:
00:09:45.030   23:43:15	-- accel/accel.sh@20 -- # read -r var val
00:09:45.030   23:43:15	-- accel/accel.sh@21 -- # val=
00:09:45.030   23:43:15	-- accel/accel.sh@22 -- # case "$var" in
00:09:45.030   23:43:15	-- accel/accel.sh@20 -- # IFS=:
00:09:45.030   23:43:15	-- accel/accel.sh@20 -- # read -r var val
00:09:45.030   23:43:15	-- accel/accel.sh@21 -- # val=
00:09:45.030   23:43:15	-- accel/accel.sh@22 -- # case "$var" in
00:09:45.030   23:43:15	-- accel/accel.sh@20 -- # IFS=:
00:09:45.030   23:43:15	-- accel/accel.sh@20 -- # read -r var val
00:09:45.030   23:43:15	-- accel/accel.sh@21 -- # val=
00:09:45.030   23:43:15	-- accel/accel.sh@22 -- # case "$var" in
00:09:45.030   23:43:15	-- accel/accel.sh@20 -- # IFS=:
00:09:45.030   23:43:15	-- accel/accel.sh@20 -- # read -r var val
00:09:45.030   23:43:15	-- accel/accel.sh@21 -- # val=
00:09:45.030   23:43:15	-- accel/accel.sh@22 -- # case "$var" in
00:09:45.030   23:43:15	-- accel/accel.sh@20 -- # IFS=:
00:09:45.030   23:43:15	-- accel/accel.sh@20 -- # read -r var val
00:09:45.030   23:43:15	-- accel/accel.sh@28 -- # [[ -n software ]]
00:09:45.030   23:43:15	-- accel/accel.sh@28 -- # [[ -n copy_crc32c ]]
00:09:45.030   23:43:15	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
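The backslash-riddled [[ software == \s\o\f\t\w\a\r\e ]] checks above are deliberate: the right-hand side of == inside [[ ]] is a glob pattern, so escaping every character forces a literal, pattern-free comparison of the module name. A two-line illustration:

    [[ software == s* ]]           && echo 'glob match'     # '*' acts as a wildcard
    [[ 'so*are' == \s\o\*\a\r\e ]] && echo 'literal match'  # escaped '*' matches only '*'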
00:09:45.030  
00:09:45.030  real	0m4.829s
00:09:45.030  user	0m4.216s
00:09:45.030  sys	0m0.451s
00:09:45.030   23:43:15	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:45.030   23:43:15	-- common/autotest_common.sh@10 -- # set +x
00:09:45.030  ************************************
00:09:45.030  END TEST accel_copy_crc32c
00:09:45.030  ************************************
00:09:45.030   23:43:15	-- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:09:45.030   23:43:15	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:09:45.030   23:43:15	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:45.030   23:43:15	-- common/autotest_common.sh@10 -- # set +x
00:09:45.030  ************************************
00:09:45.030  START TEST accel_copy_crc32c_C2
00:09:45.030  ************************************
00:09:45.030   23:43:15	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:09:45.030   23:43:15	-- accel/accel.sh@16 -- # local accel_opc
00:09:45.030   23:43:15	-- accel/accel.sh@17 -- # local accel_module
00:09:45.030    23:43:15	-- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:09:45.030    23:43:15	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:09:45.030     23:43:15	-- accel/accel.sh@12 -- # build_accel_config
00:09:45.030     23:43:15	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:45.030     23:43:15	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:45.030     23:43:15	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:45.030     23:43:15	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:45.030     23:43:15	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:45.030     23:43:15	-- accel/accel.sh@41 -- # local IFS=,
00:09:45.030     23:43:15	-- accel/accel.sh@42 -- # jq -r .
00:09:45.030  [2024-12-13 23:43:15.589682] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:45.030  [2024-12-13 23:43:15.589902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106748 ]
00:09:45.030  [2024-12-13 23:43:15.761176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:45.289  [2024-12-13 23:43:15.972782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:47.824   23:43:17	-- accel/accel.sh@18 -- # out='
00:09:47.824  SPDK Configuration:
00:09:47.824  Core mask:      0x1
00:09:47.824  
00:09:47.824  Accel Perf Configuration:
00:09:47.824  Workload Type:  copy_crc32c
00:09:47.824  CRC-32C seed:   0
00:09:47.824  Vector size:    4096 bytes
00:09:47.824  Transfer size:  8192 bytes
00:09:47.824  Vector count:   2
00:09:47.824  Module:         software
00:09:47.824  Queue depth:    32
00:09:47.824  Allocate depth: 32
00:09:47.824  # threads/core: 1
00:09:47.824  Run time:       1 seconds
00:09:47.824  Verify:         Yes
00:09:47.824  
00:09:47.824  Running for 1 seconds...
00:09:47.824  
00:09:47.824  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:09:47.824  ------------------------------------------------------------------------------------
00:09:47.824  0,0                      178144/s       1391 MiB/s                0                0
00:09:47.824  ====================================================================================
00:09:47.824  Total                    178144/s        695 MiB/s                0                0'
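One quirk in this table: with -C 2 the per-core row is computed from the 8192-byte transfer size, while the Total line appears to use the 4096-byte vector size, presumably an accel_perf reporting artifact rather than a measurement difference, since the transfer counts are identical:

    echo $(( 178144 * 8192 / 1024 / 1024 ))   # -> 1391 (per-core row)
    echo $(( 178144 * 4096 / 1024 / 1024 ))   # -> 695  (Total row)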
00:09:47.824   23:43:17	-- accel/accel.sh@20 -- # IFS=:
00:09:47.824   23:43:17	-- accel/accel.sh@20 -- # read -r var val
00:09:47.824    23:43:17	-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:09:47.824    23:43:17	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:09:47.824     23:43:17	-- accel/accel.sh@12 -- # build_accel_config
00:09:47.824     23:43:17	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:47.824     23:43:17	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:47.824     23:43:17	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:47.824     23:43:17	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:47.824     23:43:17	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:47.824     23:43:17	-- accel/accel.sh@41 -- # local IFS=,
00:09:47.824     23:43:17	-- accel/accel.sh@42 -- # jq -r .
00:09:47.824  [2024-12-13 23:43:17.995758] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:47.824  [2024-12-13 23:43:17.995954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106783 ]
00:09:47.824  [2024-12-13 23:43:18.166343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:47.824  [2024-12-13 23:43:18.372065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val=
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val=
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val=0x1
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val=
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val=
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val=copy_crc32c
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@24 -- # accel_opc=copy_crc32c
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val=0
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val='4096 bytes'
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val='8192 bytes'
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val=
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val=software
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@23 -- # accel_module=software
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val=32
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val=32
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val=1
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val='1 seconds'
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val=Yes
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val=
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:48.084   23:43:18	-- accel/accel.sh@21 -- # val=
00:09:48.084   23:43:18	-- accel/accel.sh@22 -- # case "$var" in
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # IFS=:
00:09:48.084   23:43:18	-- accel/accel.sh@20 -- # read -r var val
00:09:49.989   23:43:20	-- accel/accel.sh@21 -- # val=
00:09:49.989   23:43:20	-- accel/accel.sh@22 -- # case "$var" in
00:09:49.989   23:43:20	-- accel/accel.sh@20 -- # IFS=:
00:09:49.989   23:43:20	-- accel/accel.sh@20 -- # read -r var val
00:09:49.989   23:43:20	-- accel/accel.sh@21 -- # val=
00:09:49.989   23:43:20	-- accel/accel.sh@22 -- # case "$var" in
00:09:49.989   23:43:20	-- accel/accel.sh@20 -- # IFS=:
00:09:49.989   23:43:20	-- accel/accel.sh@20 -- # read -r var val
00:09:49.989   23:43:20	-- accel/accel.sh@21 -- # val=
00:09:49.989   23:43:20	-- accel/accel.sh@22 -- # case "$var" in
00:09:49.989   23:43:20	-- accel/accel.sh@20 -- # IFS=:
00:09:49.989   23:43:20	-- accel/accel.sh@20 -- # read -r var val
00:09:49.989   23:43:20	-- accel/accel.sh@21 -- # val=
00:09:49.989   23:43:20	-- accel/accel.sh@22 -- # case "$var" in
00:09:49.989   23:43:20	-- accel/accel.sh@20 -- # IFS=:
00:09:49.989   23:43:20	-- accel/accel.sh@20 -- # read -r var val
00:09:49.989   23:43:20	-- accel/accel.sh@21 -- # val=
00:09:49.989   23:43:20	-- accel/accel.sh@22 -- # case "$var" in
00:09:49.989   23:43:20	-- accel/accel.sh@20 -- # IFS=:
00:09:49.989   23:43:20	-- accel/accel.sh@20 -- # read -r var val
00:09:49.989   23:43:20	-- accel/accel.sh@21 -- # val=
00:09:49.989   23:43:20	-- accel/accel.sh@22 -- # case "$var" in
00:09:49.989   23:43:20	-- accel/accel.sh@20 -- # IFS=:
00:09:49.989   23:43:20	-- accel/accel.sh@20 -- # read -r var val
00:09:49.989   23:43:20	-- accel/accel.sh@28 -- # [[ -n software ]]
00:09:49.989   23:43:20	-- accel/accel.sh@28 -- # [[ -n copy_crc32c ]]
00:09:49.989   23:43:20	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:49.989  
00:09:49.989  real	0m4.836s
00:09:49.989  user	0m4.268s
00:09:49.989  sys	0m0.409s
00:09:49.989   23:43:20	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:49.989   23:43:20	-- common/autotest_common.sh@10 -- # set +x
00:09:49.989  ************************************
00:09:49.989  END TEST accel_copy_crc32c_C2
00:09:49.989  ************************************
00:09:49.989   23:43:20	-- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:09:49.989   23:43:20	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:09:49.989   23:43:20	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:49.989   23:43:20	-- common/autotest_common.sh@10 -- # set +x
00:09:49.989  ************************************
00:09:49.990  START TEST accel_dualcast
00:09:49.990  ************************************
00:09:49.990   23:43:20	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y
00:09:49.990   23:43:20	-- accel/accel.sh@16 -- # local accel_opc
00:09:49.990   23:43:20	-- accel/accel.sh@17 -- # local accel_module
00:09:49.990    23:43:20	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y
00:09:49.990    23:43:20	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:09:49.990     23:43:20	-- accel/accel.sh@12 -- # build_accel_config
00:09:49.990     23:43:20	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:49.990     23:43:20	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:49.990     23:43:20	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:49.990     23:43:20	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:49.990     23:43:20	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:49.990     23:43:20	-- accel/accel.sh@41 -- # local IFS=,
00:09:49.990     23:43:20	-- accel/accel.sh@42 -- # jq -r .
00:09:49.990  [2024-12-13 23:43:20.484797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:49.990  [2024-12-13 23:43:20.485028] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106840 ]
00:09:49.990  [2024-12-13 23:43:20.657783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:50.249  [2024-12-13 23:43:20.862164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:52.166   23:43:22	-- accel/accel.sh@18 -- # out='
00:09:52.166  SPDK Configuration:
00:09:52.166  Core mask:      0x1
00:09:52.166  
00:09:52.166  Accel Perf Configuration:
00:09:52.166  Workload Type:  dualcast
00:09:52.166  Transfer size:  4096 bytes
00:09:52.167  Vector count:   1
00:09:52.167  Module:         software
00:09:52.167  Queue depth:    32
00:09:52.167  Allocate depth: 32
00:09:52.167  # threads/core: 1
00:09:52.167  Run time:       1 seconds
00:09:52.167  Verify:         Yes
00:09:52.167  
00:09:52.167  Running for 1 seconds...
00:09:52.167  
00:09:52.167  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:09:52.167  ------------------------------------------------------------------------------------
00:09:52.167  0,0                      331136/s       1293 MiB/s                0                0
00:09:52.167  ====================================================================================
00:09:52.167  Total                    331136/s       1293 MiB/s                0                0'
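Dualcast writes one 4096-byte source to two destination buffers per transfer; assuming that two-destination semantics, the bandwidth column counts the source bytes once, so the bytes actually written are roughly double the reported figure:

    echo $(( 331136 * 4096 / 1024 / 1024 ))       # -> 1293 (as reported)
    echo $(( 331136 * 2 * 4096 / 1024 / 1024 ))   # -> 2587 (bytes written across both destinations)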
00:09:52.167   23:43:22	-- accel/accel.sh@20 -- # IFS=:
00:09:52.167   23:43:22	-- accel/accel.sh@20 -- # read -r var val
00:09:52.167    23:43:22	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:09:52.167    23:43:22	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:09:52.167     23:43:22	-- accel/accel.sh@12 -- # build_accel_config
00:09:52.167     23:43:22	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:52.167     23:43:22	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:52.167     23:43:22	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:52.167     23:43:22	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:52.167     23:43:22	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:52.167     23:43:22	-- accel/accel.sh@41 -- # local IFS=,
00:09:52.167     23:43:22	-- accel/accel.sh@42 -- # jq -r .
00:09:52.167  [2024-12-13 23:43:22.891355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:52.167  [2024-12-13 23:43:22.891542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106879 ]
00:09:52.425  [2024-12-13 23:43:23.060984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:52.684  [2024-12-13 23:43:23.273985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:52.943   23:43:23	-- accel/accel.sh@21 -- # val=
00:09:52.943   23:43:23	-- accel/accel.sh@22 -- # case "$var" in
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # IFS=:
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # read -r var val
00:09:52.943   23:43:23	-- accel/accel.sh@21 -- # val=
00:09:52.943   23:43:23	-- accel/accel.sh@22 -- # case "$var" in
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # IFS=:
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # read -r var val
00:09:52.943   23:43:23	-- accel/accel.sh@21 -- # val=0x1
00:09:52.943   23:43:23	-- accel/accel.sh@22 -- # case "$var" in
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # IFS=:
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # read -r var val
00:09:52.943   23:43:23	-- accel/accel.sh@21 -- # val=
00:09:52.943   23:43:23	-- accel/accel.sh@22 -- # case "$var" in
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # IFS=:
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # read -r var val
00:09:52.943   23:43:23	-- accel/accel.sh@21 -- # val=
00:09:52.943   23:43:23	-- accel/accel.sh@22 -- # case "$var" in
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # IFS=:
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # read -r var val
00:09:52.943   23:43:23	-- accel/accel.sh@21 -- # val=dualcast
00:09:52.943   23:43:23	-- accel/accel.sh@22 -- # case "$var" in
00:09:52.943   23:43:23	-- accel/accel.sh@24 -- # accel_opc=dualcast
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # IFS=:
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # read -r var val
00:09:52.943   23:43:23	-- accel/accel.sh@21 -- # val='4096 bytes'
00:09:52.943   23:43:23	-- accel/accel.sh@22 -- # case "$var" in
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # IFS=:
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # read -r var val
00:09:52.943   23:43:23	-- accel/accel.sh@21 -- # val=
00:09:52.943   23:43:23	-- accel/accel.sh@22 -- # case "$var" in
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # IFS=:
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # read -r var val
00:09:52.943   23:43:23	-- accel/accel.sh@21 -- # val=software
00:09:52.943   23:43:23	-- accel/accel.sh@22 -- # case "$var" in
00:09:52.943   23:43:23	-- accel/accel.sh@23 -- # accel_module=software
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # IFS=:
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # read -r var val
00:09:52.943   23:43:23	-- accel/accel.sh@21 -- # val=32
00:09:52.943   23:43:23	-- accel/accel.sh@22 -- # case "$var" in
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # IFS=:
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # read -r var val
00:09:52.943   23:43:23	-- accel/accel.sh@21 -- # val=32
00:09:52.943   23:43:23	-- accel/accel.sh@22 -- # case "$var" in
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # IFS=:
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # read -r var val
00:09:52.943   23:43:23	-- accel/accel.sh@21 -- # val=1
00:09:52.943   23:43:23	-- accel/accel.sh@22 -- # case "$var" in
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # IFS=:
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # read -r var val
00:09:52.943   23:43:23	-- accel/accel.sh@21 -- # val='1 seconds'
00:09:52.943   23:43:23	-- accel/accel.sh@22 -- # case "$var" in
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # IFS=:
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # read -r var val
00:09:52.943   23:43:23	-- accel/accel.sh@21 -- # val=Yes
00:09:52.943   23:43:23	-- accel/accel.sh@22 -- # case "$var" in
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # IFS=:
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # read -r var val
00:09:52.943   23:43:23	-- accel/accel.sh@21 -- # val=
00:09:52.943   23:43:23	-- accel/accel.sh@22 -- # case "$var" in
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # IFS=:
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # read -r var val
00:09:52.943   23:43:23	-- accel/accel.sh@21 -- # val=
00:09:52.943   23:43:23	-- accel/accel.sh@22 -- # case "$var" in
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # IFS=:
00:09:52.943   23:43:23	-- accel/accel.sh@20 -- # read -r var val
00:09:54.845   23:43:25	-- accel/accel.sh@21 -- # val=
00:09:54.845   23:43:25	-- accel/accel.sh@22 -- # case "$var" in
00:09:54.845   23:43:25	-- accel/accel.sh@20 -- # IFS=:
00:09:54.846   23:43:25	-- accel/accel.sh@20 -- # read -r var val
00:09:54.846   23:43:25	-- accel/accel.sh@21 -- # val=
00:09:54.846   23:43:25	-- accel/accel.sh@22 -- # case "$var" in
00:09:54.846   23:43:25	-- accel/accel.sh@20 -- # IFS=:
00:09:54.846   23:43:25	-- accel/accel.sh@20 -- # read -r var val
00:09:54.846   23:43:25	-- accel/accel.sh@21 -- # val=
00:09:54.846   23:43:25	-- accel/accel.sh@22 -- # case "$var" in
00:09:54.846   23:43:25	-- accel/accel.sh@20 -- # IFS=:
00:09:54.846   23:43:25	-- accel/accel.sh@20 -- # read -r var val
00:09:54.846   23:43:25	-- accel/accel.sh@21 -- # val=
00:09:54.846   23:43:25	-- accel/accel.sh@22 -- # case "$var" in
00:09:54.846   23:43:25	-- accel/accel.sh@20 -- # IFS=:
00:09:54.846   23:43:25	-- accel/accel.sh@20 -- # read -r var val
00:09:54.846   23:43:25	-- accel/accel.sh@21 -- # val=
00:09:54.846   23:43:25	-- accel/accel.sh@22 -- # case "$var" in
00:09:54.846   23:43:25	-- accel/accel.sh@20 -- # IFS=:
00:09:54.846   23:43:25	-- accel/accel.sh@20 -- # read -r var val
00:09:54.846   23:43:25	-- accel/accel.sh@21 -- # val=
00:09:54.846   23:43:25	-- accel/accel.sh@22 -- # case "$var" in
00:09:54.846   23:43:25	-- accel/accel.sh@20 -- # IFS=:
00:09:54.846   23:43:25	-- accel/accel.sh@20 -- # read -r var val
00:09:54.846   23:43:25	-- accel/accel.sh@28 -- # [[ -n software ]]
00:09:54.846   23:43:25	-- accel/accel.sh@28 -- # [[ -n dualcast ]]
00:09:54.846   23:43:25	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:54.846  
00:09:54.846  real	0m4.830s
00:09:54.846  user	0m4.237s
00:09:54.846  sys	0m0.413s
00:09:54.846   23:43:25	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:54.846   23:43:25	-- common/autotest_common.sh@10 -- # set +x
00:09:54.846  ************************************
00:09:54.846  END TEST accel_dualcast
00:09:54.846  ************************************
00:09:54.846   23:43:25	-- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:09:54.846   23:43:25	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:09:54.846   23:43:25	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:54.846   23:43:25	-- common/autotest_common.sh@10 -- # set +x
00:09:54.846  ************************************
00:09:54.846  START TEST accel_compare
00:09:54.846  ************************************
00:09:54.846   23:43:25	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y
00:09:54.846   23:43:25	-- accel/accel.sh@16 -- # local accel_opc
00:09:54.846   23:43:25	-- accel/accel.sh@17 -- # local accel_module
00:09:54.846    23:43:25	-- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y
00:09:54.846    23:43:25	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:09:54.846     23:43:25	-- accel/accel.sh@12 -- # build_accel_config
00:09:54.846     23:43:25	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:54.846     23:43:25	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:54.846     23:43:25	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:54.846     23:43:25	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:54.846     23:43:25	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:54.846     23:43:25	-- accel/accel.sh@41 -- # local IFS=,
00:09:54.846     23:43:25	-- accel/accel.sh@42 -- # jq -r .
00:09:54.846  [2024-12-13 23:43:25.360264] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:54.846  [2024-12-13 23:43:25.360453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106931 ]
00:09:54.846  [2024-12-13 23:43:25.529640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:55.104  [2024-12-13 23:43:25.715551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:57.007   23:43:27	-- accel/accel.sh@18 -- # out='
00:09:57.007  SPDK Configuration:
00:09:57.007  Core mask:      0x1
00:09:57.007  
00:09:57.007  Accel Perf Configuration:
00:09:57.007  Workload Type:  compare
00:09:57.007  Transfer size:  4096 bytes
00:09:57.007  Vector count:   1
00:09:57.007  Module:         software
00:09:57.007  Queue depth:    32
00:09:57.007  Allocate depth: 32
00:09:57.007  # threads/core: 1
00:09:57.007  Run time:       1 seconds
00:09:57.007  Verify:         Yes
00:09:57.007  
00:09:57.007  Running for 1 seconds...
00:09:57.007  
00:09:57.007  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:09:57.007  ------------------------------------------------------------------------------------
00:09:57.007  0,0                      465344/s       1817 MiB/s                0                0
00:09:57.007  ====================================================================================
00:09:57.007  Total                    465344/s       1817 MiB/s                0                0'
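compare posts the highest rate in this section, consistent with a read-only, memcmp-style operation that writes no destination buffer; the reported figure again matches transfers times transfer size:

    echo $(( 465344 * 4096 / 1024 / 1024 ))   # -> 1817 (MiB/s)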
00:09:57.007   23:43:27	-- accel/accel.sh@20 -- # IFS=:
00:09:57.007   23:43:27	-- accel/accel.sh@20 -- # read -r var val
00:09:57.007    23:43:27	-- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:09:57.007    23:43:27	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:09:57.007     23:43:27	-- accel/accel.sh@12 -- # build_accel_config
00:09:57.007     23:43:27	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:57.007     23:43:27	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:57.007     23:43:27	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:57.008     23:43:27	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:57.008     23:43:27	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:57.008     23:43:27	-- accel/accel.sh@41 -- # local IFS=,
00:09:57.008     23:43:27	-- accel/accel.sh@42 -- # jq -r .
00:09:57.008  [2024-12-13 23:43:27.733721] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:57.008  [2024-12-13 23:43:27.733914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106966 ]
00:09:57.266  [2024-12-13 23:43:27.901958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:57.525  [2024-12-13 23:43:28.108569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:57.784   23:43:28	-- accel/accel.sh@21 -- # val=
00:09:57.784   23:43:28	-- accel/accel.sh@22 -- # case "$var" in
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # IFS=:
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # read -r var val
00:09:57.784   23:43:28	-- accel/accel.sh@21 -- # val=
00:09:57.784   23:43:28	-- accel/accel.sh@22 -- # case "$var" in
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # IFS=:
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # read -r var val
00:09:57.784   23:43:28	-- accel/accel.sh@21 -- # val=0x1
00:09:57.784   23:43:28	-- accel/accel.sh@22 -- # case "$var" in
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # IFS=:
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # read -r var val
00:09:57.784   23:43:28	-- accel/accel.sh@21 -- # val=
00:09:57.784   23:43:28	-- accel/accel.sh@22 -- # case "$var" in
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # IFS=:
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # read -r var val
00:09:57.784   23:43:28	-- accel/accel.sh@21 -- # val=
00:09:57.784   23:43:28	-- accel/accel.sh@22 -- # case "$var" in
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # IFS=:
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # read -r var val
00:09:57.784   23:43:28	-- accel/accel.sh@21 -- # val=compare
00:09:57.784   23:43:28	-- accel/accel.sh@22 -- # case "$var" in
00:09:57.784   23:43:28	-- accel/accel.sh@24 -- # accel_opc=compare
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # IFS=:
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # read -r var val
00:09:57.784   23:43:28	-- accel/accel.sh@21 -- # val='4096 bytes'
00:09:57.784   23:43:28	-- accel/accel.sh@22 -- # case "$var" in
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # IFS=:
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # read -r var val
00:09:57.784   23:43:28	-- accel/accel.sh@21 -- # val=
00:09:57.784   23:43:28	-- accel/accel.sh@22 -- # case "$var" in
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # IFS=:
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # read -r var val
00:09:57.784   23:43:28	-- accel/accel.sh@21 -- # val=software
00:09:57.784   23:43:28	-- accel/accel.sh@22 -- # case "$var" in
00:09:57.784   23:43:28	-- accel/accel.sh@23 -- # accel_module=software
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # IFS=:
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # read -r var val
00:09:57.784   23:43:28	-- accel/accel.sh@21 -- # val=32
00:09:57.784   23:43:28	-- accel/accel.sh@22 -- # case "$var" in
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # IFS=:
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # read -r var val
00:09:57.784   23:43:28	-- accel/accel.sh@21 -- # val=32
00:09:57.784   23:43:28	-- accel/accel.sh@22 -- # case "$var" in
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # IFS=:
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # read -r var val
00:09:57.784   23:43:28	-- accel/accel.sh@21 -- # val=1
00:09:57.784   23:43:28	-- accel/accel.sh@22 -- # case "$var" in
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # IFS=:
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # read -r var val
00:09:57.784   23:43:28	-- accel/accel.sh@21 -- # val='1 seconds'
00:09:57.784   23:43:28	-- accel/accel.sh@22 -- # case "$var" in
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # IFS=:
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # read -r var val
00:09:57.784   23:43:28	-- accel/accel.sh@21 -- # val=Yes
00:09:57.784   23:43:28	-- accel/accel.sh@22 -- # case "$var" in
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # IFS=:
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # read -r var val
00:09:57.784   23:43:28	-- accel/accel.sh@21 -- # val=
00:09:57.784   23:43:28	-- accel/accel.sh@22 -- # case "$var" in
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # IFS=:
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # read -r var val
00:09:57.784   23:43:28	-- accel/accel.sh@21 -- # val=
00:09:57.784   23:43:28	-- accel/accel.sh@22 -- # case "$var" in
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # IFS=:
00:09:57.784   23:43:28	-- accel/accel.sh@20 -- # read -r var val
00:09:59.687   23:43:30	-- accel/accel.sh@21 -- # val=
00:09:59.687   23:43:30	-- accel/accel.sh@22 -- # case "$var" in
00:09:59.687   23:43:30	-- accel/accel.sh@20 -- # IFS=:
00:09:59.687   23:43:30	-- accel/accel.sh@20 -- # read -r var val
00:09:59.687   23:43:30	-- accel/accel.sh@21 -- # val=
00:09:59.687   23:43:30	-- accel/accel.sh@22 -- # case "$var" in
00:09:59.687   23:43:30	-- accel/accel.sh@20 -- # IFS=:
00:09:59.687   23:43:30	-- accel/accel.sh@20 -- # read -r var val
00:09:59.687   23:43:30	-- accel/accel.sh@21 -- # val=
00:09:59.687   23:43:30	-- accel/accel.sh@22 -- # case "$var" in
00:09:59.687   23:43:30	-- accel/accel.sh@20 -- # IFS=:
00:09:59.687   23:43:30	-- accel/accel.sh@20 -- # read -r var val
00:09:59.687   23:43:30	-- accel/accel.sh@21 -- # val=
00:09:59.687   23:43:30	-- accel/accel.sh@22 -- # case "$var" in
00:09:59.687   23:43:30	-- accel/accel.sh@20 -- # IFS=:
00:09:59.687   23:43:30	-- accel/accel.sh@20 -- # read -r var val
00:09:59.687   23:43:30	-- accel/accel.sh@21 -- # val=
00:09:59.687   23:43:30	-- accel/accel.sh@22 -- # case "$var" in
00:09:59.687   23:43:30	-- accel/accel.sh@20 -- # IFS=:
00:09:59.687   23:43:30	-- accel/accel.sh@20 -- # read -r var val
00:09:59.687   23:43:30	-- accel/accel.sh@21 -- # val=
00:09:59.687   23:43:30	-- accel/accel.sh@22 -- # case "$var" in
00:09:59.687   23:43:30	-- accel/accel.sh@20 -- # IFS=:
00:09:59.687   23:43:30	-- accel/accel.sh@20 -- # read -r var val
00:09:59.687   23:43:30	-- accel/accel.sh@28 -- # [[ -n software ]]
00:09:59.687   23:43:30	-- accel/accel.sh@28 -- # [[ -n compare ]]
00:09:59.687   23:43:30	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:59.687  
00:09:59.687  real	0m4.779s
00:09:59.687  user	0m4.166s
00:09:59.687  sys	0m0.451s
00:09:59.687   23:43:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:59.687   23:43:30	-- common/autotest_common.sh@10 -- # set +x
00:09:59.687  ************************************
00:09:59.687  END TEST accel_compare
00:09:59.687  ************************************
00:09:59.687   23:43:30	-- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:09:59.687   23:43:30	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:09:59.687   23:43:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:59.687   23:43:30	-- common/autotest_common.sh@10 -- # set +x
00:09:59.687  ************************************
00:09:59.687  START TEST accel_xor
00:09:59.687  ************************************
00:09:59.687   23:43:30	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y
00:09:59.687   23:43:30	-- accel/accel.sh@16 -- # local accel_opc
00:09:59.687   23:43:30	-- accel/accel.sh@17 -- # local accel_module
00:09:59.687    23:43:30	-- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y
00:09:59.687    23:43:30	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:09:59.687     23:43:30	-- accel/accel.sh@12 -- # build_accel_config
00:09:59.687     23:43:30	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:59.687     23:43:30	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:59.687     23:43:30	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:59.687     23:43:30	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:59.687     23:43:30	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:59.687     23:43:30	-- accel/accel.sh@41 -- # local IFS=,
00:09:59.687     23:43:30	-- accel/accel.sh@42 -- # jq -r .
00:09:59.687  [2024-12-13 23:43:30.195096] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:59.687  [2024-12-13 23:43:30.195292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107017 ]
00:09:59.687  [2024-12-13 23:43:30.364188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:59.945  [2024-12-13 23:43:30.545525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:01.846   23:43:32	-- accel/accel.sh@18 -- # out='
00:10:01.846  SPDK Configuration:
00:10:01.846  Core mask:      0x1
00:10:01.846  
00:10:01.846  Accel Perf Configuration:
00:10:01.846  Workload Type:  xor
00:10:01.846  Source buffers: 2
00:10:01.846  Transfer size:  4096 bytes
00:10:01.846  Vector count:   1
00:10:01.846  Module:         software
00:10:01.846  Queue depth:    32
00:10:01.846  Allocate depth: 32
00:10:01.846  # threads/core: 1
00:10:01.846  Run time:       1 seconds
00:10:01.846  Verify:         Yes
00:10:01.846  
00:10:01.846  Running for 1 seconds...
00:10:01.846  
00:10:01.846  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:01.846  ------------------------------------------------------------------------------------
00:10:01.846  0,0                      225024/s        879 MiB/s                0                0
00:10:01.846  ====================================================================================
00:10:01.846  Total                    225024/s        879 MiB/s                0                0'
00:10:01.846   23:43:32	-- accel/accel.sh@20 -- # IFS=:
00:10:01.846   23:43:32	-- accel/accel.sh@20 -- # read -r var val
00:10:01.846    23:43:32	-- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:10:01.846    23:43:32	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:10:01.846     23:43:32	-- accel/accel.sh@12 -- # build_accel_config
00:10:01.846     23:43:32	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:01.846     23:43:32	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:01.846     23:43:32	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:01.846     23:43:32	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:01.846     23:43:32	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:01.846     23:43:32	-- accel/accel.sh@41 -- # local IFS=,
00:10:01.846     23:43:32	-- accel/accel.sh@42 -- # jq -r .
00:10:01.846  [2024-12-13 23:43:32.566161] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:01.846  [2024-12-13 23:43:32.566500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107060 ]
00:10:02.104  [2024-12-13 23:43:32.733478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:02.363  [2024-12-13 23:43:32.928756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val=
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val=
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val=0x1
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val=
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val=
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val=xor
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@24 -- # accel_opc=xor
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val=2
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val=
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val=software
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@23 -- # accel_module=software
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val=32
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val=32
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val=1
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val=Yes
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val=
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:02.622   23:43:33	-- accel/accel.sh@21 -- # val=
00:10:02.622   23:43:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # IFS=:
00:10:02.622   23:43:33	-- accel/accel.sh@20 -- # read -r var val
00:10:04.526   23:43:34	-- accel/accel.sh@21 -- # val=
00:10:04.526   23:43:34	-- accel/accel.sh@22 -- # case "$var" in
00:10:04.526   23:43:34	-- accel/accel.sh@20 -- # IFS=:
00:10:04.526   23:43:34	-- accel/accel.sh@20 -- # read -r var val
00:10:04.526   23:43:34	-- accel/accel.sh@21 -- # val=
00:10:04.526   23:43:34	-- accel/accel.sh@22 -- # case "$var" in
00:10:04.526   23:43:34	-- accel/accel.sh@20 -- # IFS=:
00:10:04.526   23:43:34	-- accel/accel.sh@20 -- # read -r var val
00:10:04.526   23:43:34	-- accel/accel.sh@21 -- # val=
00:10:04.526   23:43:34	-- accel/accel.sh@22 -- # case "$var" in
00:10:04.526   23:43:34	-- accel/accel.sh@20 -- # IFS=:
00:10:04.526   23:43:34	-- accel/accel.sh@20 -- # read -r var val
00:10:04.526   23:43:34	-- accel/accel.sh@21 -- # val=
00:10:04.526   23:43:34	-- accel/accel.sh@22 -- # case "$var" in
00:10:04.526   23:43:34	-- accel/accel.sh@20 -- # IFS=:
00:10:04.526   23:43:34	-- accel/accel.sh@20 -- # read -r var val
00:10:04.526   23:43:34	-- accel/accel.sh@21 -- # val=
00:10:04.526   23:43:34	-- accel/accel.sh@22 -- # case "$var" in
00:10:04.526   23:43:34	-- accel/accel.sh@20 -- # IFS=:
00:10:04.526   23:43:34	-- accel/accel.sh@20 -- # read -r var val
00:10:04.526   23:43:34	-- accel/accel.sh@21 -- # val=
00:10:04.526   23:43:34	-- accel/accel.sh@22 -- # case "$var" in
00:10:04.526   23:43:34	-- accel/accel.sh@20 -- # IFS=:
00:10:04.526   23:43:34	-- accel/accel.sh@20 -- # read -r var val
00:10:04.526   23:43:34	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:04.526   23:43:34	-- accel/accel.sh@28 -- # [[ -n xor ]]
00:10:04.526   23:43:34	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:04.526  
00:10:04.526  real	0m4.778s
00:10:04.526  user	0m4.199s
00:10:04.526  sys	0m0.419s
00:10:04.526  ************************************
00:10:04.526   23:43:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:04.526   23:43:34	-- common/autotest_common.sh@10 -- # set +x
00:10:04.526  END TEST accel_xor
00:10:04.526  ************************************
00:10:04.526   23:43:34	-- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:10:04.526   23:43:34	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:10:04.526   23:43:34	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:04.526   23:43:34	-- common/autotest_common.sh@10 -- # set +x
00:10:04.526  ************************************
00:10:04.526  START TEST accel_xor
00:10:04.526  ************************************
00:10:04.526   23:43:34	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3
00:10:04.526   23:43:34	-- accel/accel.sh@16 -- # local accel_opc
00:10:04.526   23:43:34	-- accel/accel.sh@17 -- # local accel_module
00:10:04.526    23:43:34	-- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3
00:10:04.526    23:43:34	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:10:04.526     23:43:34	-- accel/accel.sh@12 -- # build_accel_config
00:10:04.526     23:43:34	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:04.526     23:43:34	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:04.526     23:43:34	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:04.526     23:43:34	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:04.526     23:43:34	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:04.526     23:43:34	-- accel/accel.sh@41 -- # local IFS=,
00:10:04.526     23:43:34	-- accel/accel.sh@42 -- # jq -r .
00:10:04.526  [2024-12-13 23:43:35.027236] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:04.526  [2024-12-13 23:43:35.027593] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107105 ]
00:10:04.526  [2024-12-13 23:43:35.197359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:04.784  [2024-12-13 23:43:35.378939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:06.686   23:43:37	-- accel/accel.sh@18 -- # out='
00:10:06.686  SPDK Configuration:
00:10:06.686  Core mask:      0x1
00:10:06.686  
00:10:06.686  Accel Perf Configuration:
00:10:06.686  Workload Type:  xor
00:10:06.687  Source buffers: 3
00:10:06.687  Transfer size:  4096 bytes
00:10:06.687  Vector count:   1
00:10:06.687  Module:         software
00:10:06.687  Queue depth:    32
00:10:06.687  Allocate depth: 32
00:10:06.687  # threads/core: 1
00:10:06.687  Run time:       1 seconds
00:10:06.687  Verify:         Yes
00:10:06.687  
00:10:06.687  Running for 1 seconds...
00:10:06.687  
00:10:06.687  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:06.687  ------------------------------------------------------------------------------------
00:10:06.687  0,0                      216512/s        845 MiB/s                0                0
00:10:06.687  ====================================================================================
00:10:06.687  Total                    216512/s        845 MiB/s                0                0'
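[annotation] The bandwidth column follows from the transfer rate: at the 4096-byte transfer size used throughout these tests, MiB/s = transfers/s * 4096 / 2^20. A quick shell sanity check (annotation only, not harness output):

    # 216512 transfers/s at 4096 bytes each, converted to MiB/s
    echo $(( 216512 * 4096 / 1048576 ))   # -> 845, matching the table above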
00:10:06.687   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:06.687   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:06.687    23:43:37	-- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:10:06.687    23:43:37	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:10:06.687     23:43:37	-- accel/accel.sh@12 -- # build_accel_config
00:10:06.687     23:43:37	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:06.687     23:43:37	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:06.687     23:43:37	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:06.687     23:43:37	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:06.687     23:43:37	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:06.687     23:43:37	-- accel/accel.sh@41 -- # local IFS=,
00:10:06.687     23:43:37	-- accel/accel.sh@42 -- # jq -r .
00:10:06.687  [2024-12-13 23:43:37.400908] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:06.687  [2024-12-13 23:43:37.401281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107138 ]
00:10:06.946  [2024-12-13 23:43:37.569789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:07.204  [2024-12-13 23:43:37.770681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val=
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val=
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val=0x1
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val=
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val=
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val=xor
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@24 -- # accel_opc=xor
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val=3
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val=
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val=software
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@23 -- # accel_module=software
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val=32
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val=32
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val=1
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val=Yes
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val=
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:07.463   23:43:37	-- accel/accel.sh@21 -- # val=
00:10:07.463   23:43:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # IFS=:
00:10:07.463   23:43:37	-- accel/accel.sh@20 -- # read -r var val
00:10:09.384   23:43:39	-- accel/accel.sh@21 -- # val=
00:10:09.384   23:43:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:09.384   23:43:39	-- accel/accel.sh@20 -- # IFS=:
00:10:09.384   23:43:39	-- accel/accel.sh@20 -- # read -r var val
00:10:09.384   23:43:39	-- accel/accel.sh@21 -- # val=
00:10:09.384   23:43:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:09.384   23:43:39	-- accel/accel.sh@20 -- # IFS=:
00:10:09.384   23:43:39	-- accel/accel.sh@20 -- # read -r var val
00:10:09.384   23:43:39	-- accel/accel.sh@21 -- # val=
00:10:09.384   23:43:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:09.384   23:43:39	-- accel/accel.sh@20 -- # IFS=:
00:10:09.384   23:43:39	-- accel/accel.sh@20 -- # read -r var val
00:10:09.384   23:43:39	-- accel/accel.sh@21 -- # val=
00:10:09.384   23:43:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:09.384   23:43:39	-- accel/accel.sh@20 -- # IFS=:
00:10:09.384   23:43:39	-- accel/accel.sh@20 -- # read -r var val
00:10:09.384   23:43:39	-- accel/accel.sh@21 -- # val=
00:10:09.384   23:43:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:09.384   23:43:39	-- accel/accel.sh@20 -- # IFS=:
00:10:09.384   23:43:39	-- accel/accel.sh@20 -- # read -r var val
00:10:09.384   23:43:39	-- accel/accel.sh@21 -- # val=
00:10:09.384   23:43:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:09.384   23:43:39	-- accel/accel.sh@20 -- # IFS=:
00:10:09.384   23:43:39	-- accel/accel.sh@20 -- # read -r var val
00:10:09.384   23:43:39	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:09.384   23:43:39	-- accel/accel.sh@28 -- # [[ -n xor ]]
00:10:09.384   23:43:39	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:09.384  
00:10:09.384  real	0m4.776s
00:10:09.384  user	0m4.214s
00:10:09.384  sys	0m0.397s
00:10:09.384   23:43:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:09.384   23:43:39	-- common/autotest_common.sh@10 -- # set +x
00:10:09.384  ************************************
00:10:09.384  END TEST accel_xor
00:10:09.384  ************************************
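[annotation] The accel_xor runs above drive the same accel_perf binary with the flags captured in the trace: -t sets the run time in seconds, -w the workload, -y enables verification, and -x the number of xor source buffers ("Source buffers: 3" in the config dump). A minimal standalone sketch, assuming the repo layout from the trace; the harness additionally feeds a generated JSON config via -c /dev/fd/62, which is omitted here:

    #!/usr/bin/env bash
    # Sketch: re-run the xor workload outside the autotest harness.
    PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    "$PERF" -t 1 -w xor -y -x 3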
00:10:09.384   23:43:39	-- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:10:09.384   23:43:39	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:10:09.384   23:43:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:09.384   23:43:39	-- common/autotest_common.sh@10 -- # set +x
00:10:09.384  ************************************
00:10:09.384  START TEST accel_dif_verify
00:10:09.384  ************************************
00:10:09.384   23:43:39	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify
00:10:09.384   23:43:39	-- accel/accel.sh@16 -- # local accel_opc
00:10:09.384   23:43:39	-- accel/accel.sh@17 -- # local accel_module
00:10:09.384    23:43:39	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify
00:10:09.384    23:43:39	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:10:09.384     23:43:39	-- accel/accel.sh@12 -- # build_accel_config
00:10:09.384     23:43:39	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:09.384     23:43:39	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:09.384     23:43:39	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:09.384     23:43:39	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:09.384     23:43:39	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:09.384     23:43:39	-- accel/accel.sh@41 -- # local IFS=,
00:10:09.384     23:43:39	-- accel/accel.sh@42 -- # jq -r .
00:10:09.384  [2024-12-13 23:43:39.859130] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:09.384  [2024-12-13 23:43:39.859503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107190 ]
00:10:09.384  [2024-12-13 23:43:40.024210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:09.663  [2024-12-13 23:43:40.205491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:11.564   23:43:42	-- accel/accel.sh@18 -- # out='
00:10:11.564  SPDK Configuration:
00:10:11.564  Core mask:      0x1
00:10:11.564  
00:10:11.564  Accel Perf Configuration:
00:10:11.564  Workload Type:  dif_verify
00:10:11.564  Vector size:    4096 bytes
00:10:11.564  Transfer size:  4096 bytes
00:10:11.564  Block size:     512 bytes
00:10:11.564  Metadata size:  8 bytes
00:10:11.564  Vector count    1
00:10:11.564  Module:         software
00:10:11.564  Queue depth:    32
00:10:11.564  Allocate depth: 32
00:10:11.564  # threads/core: 1
00:10:11.564  Run time:       1 seconds
00:10:11.564  Verify:         No
00:10:11.564  
00:10:11.564  Running for 1 seconds...
00:10:11.564  
00:10:11.564  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:11.564  ------------------------------------------------------------------------------------
00:10:11.564  0,0                      117760/s        460 MiB/s                0                0
00:10:11.564  ====================================================================================
00:10:11.564  Total                    117760/s        460 MiB/s                0                0'
00:10:11.564   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:11.564   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:11.564    23:43:42	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:10:11.564    23:43:42	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:10:11.564     23:43:42	-- accel/accel.sh@12 -- # build_accel_config
00:10:11.564     23:43:42	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:11.564     23:43:42	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:11.564     23:43:42	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:11.564     23:43:42	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:11.564     23:43:42	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:11.564     23:43:42	-- accel/accel.sh@41 -- # local IFS=,
00:10:11.564     23:43:42	-- accel/accel.sh@42 -- # jq -r .
00:10:11.564  [2024-12-13 23:43:42.226098] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:11.564  [2024-12-13 23:43:42.227156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107231 ]
00:10:11.823  [2024-12-13 23:43:42.396759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:12.081  [2024-12-13 23:43:42.600288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val=
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val=
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val=0x1
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val=
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val=
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val=dif_verify
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@24 -- # accel_opc=dif_verify
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val='512 bytes'
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val='8 bytes'
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val=
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val=software
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@23 -- # accel_module=software
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val=32
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val=32
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val=1
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val=No
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val=
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:12.340   23:43:42	-- accel/accel.sh@21 -- # val=
00:10:12.340   23:43:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # IFS=:
00:10:12.340   23:43:42	-- accel/accel.sh@20 -- # read -r var val
00:10:14.242   23:43:44	-- accel/accel.sh@21 -- # val=
00:10:14.242   23:43:44	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.242   23:43:44	-- accel/accel.sh@20 -- # IFS=:
00:10:14.242   23:43:44	-- accel/accel.sh@20 -- # read -r var val
00:10:14.242   23:43:44	-- accel/accel.sh@21 -- # val=
00:10:14.242   23:43:44	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.242   23:43:44	-- accel/accel.sh@20 -- # IFS=:
00:10:14.242   23:43:44	-- accel/accel.sh@20 -- # read -r var val
00:10:14.242   23:43:44	-- accel/accel.sh@21 -- # val=
00:10:14.242   23:43:44	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.242   23:43:44	-- accel/accel.sh@20 -- # IFS=:
00:10:14.242   23:43:44	-- accel/accel.sh@20 -- # read -r var val
00:10:14.242   23:43:44	-- accel/accel.sh@21 -- # val=
00:10:14.242   23:43:44	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.242   23:43:44	-- accel/accel.sh@20 -- # IFS=:
00:10:14.242   23:43:44	-- accel/accel.sh@20 -- # read -r var val
00:10:14.242   23:43:44	-- accel/accel.sh@21 -- # val=
00:10:14.242   23:43:44	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.242   23:43:44	-- accel/accel.sh@20 -- # IFS=:
00:10:14.242   23:43:44	-- accel/accel.sh@20 -- # read -r var val
00:10:14.242   23:43:44	-- accel/accel.sh@21 -- # val=
00:10:14.242   23:43:44	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.243   23:43:44	-- accel/accel.sh@20 -- # IFS=:
00:10:14.243   23:43:44	-- accel/accel.sh@20 -- # read -r var val
00:10:14.243   23:43:44	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:14.243   23:43:44	-- accel/accel.sh@28 -- # [[ -n dif_verify ]]
00:10:14.243   23:43:44	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:14.243  
00:10:14.243  real	0m4.777s
00:10:14.243  user	0m4.179s
00:10:14.243  sys	0m0.434s
00:10:14.243   23:43:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:14.243   23:43:44	-- common/autotest_common.sh@10 -- # set +x
00:10:14.243  ************************************
00:10:14.243  END TEST accel_dif_verify
00:10:14.243  ************************************
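[annotation] The dif_verify configuration implies a protected-block layout: reading "Block size: 512" and "Metadata size: 8" as 8 bytes of protection information per 512-byte block is an interpretation, under which each 4096-byte transfer spans 8 blocks carrying 64 bytes of DIF metadata:

    # blocks and DIF bytes per transfer, from the config dump above
    echo $(( 4096 / 512 ))        # -> 8 blocks per 4096-byte transfer
    echo $(( 4096 / 512 * 8 ))    # -> 64 bytes of metadata per transfer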
00:10:14.243   23:43:44	-- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:10:14.243   23:43:44	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:10:14.243   23:43:44	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:14.243   23:43:44	-- common/autotest_common.sh@10 -- # set +x
00:10:14.243  ************************************
00:10:14.243  START TEST accel_dif_generate
00:10:14.243  ************************************
00:10:14.243   23:43:44	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate
00:10:14.243   23:43:44	-- accel/accel.sh@16 -- # local accel_opc
00:10:14.243   23:43:44	-- accel/accel.sh@17 -- # local accel_module
00:10:14.243    23:43:44	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate
00:10:14.243    23:43:44	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:10:14.243     23:43:44	-- accel/accel.sh@12 -- # build_accel_config
00:10:14.243     23:43:44	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:14.243     23:43:44	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:14.243     23:43:44	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:14.243     23:43:44	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:14.243     23:43:44	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:14.243     23:43:44	-- accel/accel.sh@41 -- # local IFS=,
00:10:14.243     23:43:44	-- accel/accel.sh@42 -- # jq -r .
00:10:14.243  [2024-12-13 23:43:44.687134] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:14.243  [2024-12-13 23:43:44.687464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107284 ]
00:10:14.243  [2024-12-13 23:43:44.857261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:14.501  [2024-12-13 23:43:45.037339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:16.403   23:43:47	-- accel/accel.sh@18 -- # out='
00:10:16.403  SPDK Configuration:
00:10:16.403  Core mask:      0x1
00:10:16.403  
00:10:16.403  Accel Perf Configuration:
00:10:16.403  Workload Type:  dif_generate
00:10:16.403  Vector size:    4096 bytes
00:10:16.403  Transfer size:  4096 bytes
00:10:16.403  Block size:     512 bytes
00:10:16.403  Metadata size:  8 bytes
00:10:16.403  Vector count    1
00:10:16.403  Module:         software
00:10:16.403  Queue depth:    32
00:10:16.403  Allocate depth: 32
00:10:16.403  # threads/core: 1
00:10:16.403  Run time:       1 seconds
00:10:16.403  Verify:         No
00:10:16.403  
00:10:16.403  Running for 1 seconds...
00:10:16.403  
00:10:16.403  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:16.403  ------------------------------------------------------------------------------------
00:10:16.403  0,0                      142144/s        555 MiB/s                0                0
00:10:16.403  ====================================================================================
00:10:16.403  Total                    142144/s        555 MiB/s                0                0'
00:10:16.403   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.403   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.403    23:43:47	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:10:16.403    23:43:47	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:10:16.403     23:43:47	-- accel/accel.sh@12 -- # build_accel_config
00:10:16.403     23:43:47	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:16.403     23:43:47	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:16.403     23:43:47	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:16.403     23:43:47	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:16.403     23:43:47	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:16.403     23:43:47	-- accel/accel.sh@41 -- # local IFS=,
00:10:16.403     23:43:47	-- accel/accel.sh@42 -- # jq -r .
00:10:16.403  [2024-12-13 23:43:47.061513] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:16.403  [2024-12-13 23:43:47.061882] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107319 ]
00:10:16.662  [2024-12-13 23:43:47.229449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:16.920  [2024-12-13 23:43:47.424565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val=
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val=
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val=0x1
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val=
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val=
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val=dif_generate
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@24 -- # accel_opc=dif_generate
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val='512 bytes'
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val='8 bytes'
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val=
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val=software
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@23 -- # accel_module=software
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val=32
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val=32
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val=1
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val=No
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val=
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:16.920   23:43:47	-- accel/accel.sh@21 -- # val=
00:10:16.920   23:43:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # IFS=:
00:10:16.920   23:43:47	-- accel/accel.sh@20 -- # read -r var val
00:10:18.822   23:43:49	-- accel/accel.sh@21 -- # val=
00:10:18.822   23:43:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:18.822   23:43:49	-- accel/accel.sh@20 -- # IFS=:
00:10:18.822   23:43:49	-- accel/accel.sh@20 -- # read -r var val
00:10:18.822   23:43:49	-- accel/accel.sh@21 -- # val=
00:10:18.822   23:43:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:18.822   23:43:49	-- accel/accel.sh@20 -- # IFS=:
00:10:18.822   23:43:49	-- accel/accel.sh@20 -- # read -r var val
00:10:18.822   23:43:49	-- accel/accel.sh@21 -- # val=
00:10:18.822   23:43:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:18.822   23:43:49	-- accel/accel.sh@20 -- # IFS=:
00:10:18.822   23:43:49	-- accel/accel.sh@20 -- # read -r var val
00:10:18.822   23:43:49	-- accel/accel.sh@21 -- # val=
00:10:18.822   23:43:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:18.822   23:43:49	-- accel/accel.sh@20 -- # IFS=:
00:10:18.822   23:43:49	-- accel/accel.sh@20 -- # read -r var val
00:10:18.822   23:43:49	-- accel/accel.sh@21 -- # val=
00:10:18.822   23:43:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:18.822   23:43:49	-- accel/accel.sh@20 -- # IFS=:
00:10:18.822   23:43:49	-- accel/accel.sh@20 -- # read -r var val
00:10:18.822   23:43:49	-- accel/accel.sh@21 -- # val=
00:10:18.822   23:43:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:18.822   23:43:49	-- accel/accel.sh@20 -- # IFS=:
00:10:18.822   23:43:49	-- accel/accel.sh@20 -- # read -r var val
00:10:18.822   23:43:49	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:18.822   23:43:49	-- accel/accel.sh@28 -- # [[ -n dif_generate ]]
00:10:18.822   23:43:49	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:18.822  
00:10:18.822  real	0m4.774s
00:10:18.822  user	0m4.212s
00:10:18.822  sys	0m0.394s
00:10:18.822   23:43:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:18.822   23:43:49	-- common/autotest_common.sh@10 -- # set +x
00:10:18.822  ************************************
00:10:18.822  END TEST accel_dif_generate
00:10:18.822  ************************************
00:10:18.822   23:43:49	-- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:10:18.822   23:43:49	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:10:18.822   23:43:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:18.822   23:43:49	-- common/autotest_common.sh@10 -- # set +x
00:10:18.822  ************************************
00:10:18.822  START TEST accel_dif_generate_copy
00:10:18.822  ************************************
00:10:18.822   23:43:49	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy
00:10:18.822   23:43:49	-- accel/accel.sh@16 -- # local accel_opc
00:10:18.822   23:43:49	-- accel/accel.sh@17 -- # local accel_module
00:10:18.822    23:43:49	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy
00:10:18.822    23:43:49	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:10:18.822     23:43:49	-- accel/accel.sh@12 -- # build_accel_config
00:10:18.822     23:43:49	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:18.822     23:43:49	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:18.822     23:43:49	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:18.822     23:43:49	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:18.822     23:43:49	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:18.822     23:43:49	-- accel/accel.sh@41 -- # local IFS=,
00:10:18.822     23:43:49	-- accel/accel.sh@42 -- # jq -r .
00:10:18.822  [2024-12-13 23:43:49.509759] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:18.822  [2024-12-13 23:43:49.510101] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107364 ]
00:10:19.081  [2024-12-13 23:43:49.662132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:19.340  [2024-12-13 23:43:49.865665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:21.242   23:43:51	-- accel/accel.sh@18 -- # out='
00:10:21.242  SPDK Configuration:
00:10:21.242  Core mask:      0x1
00:10:21.242  
00:10:21.242  Accel Perf Configuration:
00:10:21.242  Workload Type:  dif_generate_copy
00:10:21.242  Vector size:    4096 bytes
00:10:21.242  Transfer size:  4096 bytes
00:10:21.242  Vector count    1
00:10:21.242  Module:         software
00:10:21.242  Queue depth:    32
00:10:21.242  Allocate depth: 32
00:10:21.242  # threads/core: 1
00:10:21.242  Run time:       1 seconds
00:10:21.242  Verify:         No
00:10:21.242  
00:10:21.242  Running for 1 seconds...
00:10:21.242  
00:10:21.242  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:21.242  ------------------------------------------------------------------------------------
00:10:21.242  0,0                      109856/s        429 MiB/s                0                0
00:10:21.242  ====================================================================================
00:10:21.242  Total                    109856/s        429 MiB/s                0                0'
00:10:21.242   23:43:51	-- accel/accel.sh@20 -- # IFS=:
00:10:21.242   23:43:51	-- accel/accel.sh@20 -- # read -r var val
00:10:21.242    23:43:51	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:10:21.242    23:43:51	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:10:21.242     23:43:51	-- accel/accel.sh@12 -- # build_accel_config
00:10:21.242     23:43:51	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:21.242     23:43:51	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:21.242     23:43:51	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:21.242     23:43:51	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:21.242     23:43:51	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:21.242     23:43:51	-- accel/accel.sh@41 -- # local IFS=,
00:10:21.242     23:43:51	-- accel/accel.sh@42 -- # jq -r .
00:10:21.242  [2024-12-13 23:43:51.882685] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:21.243  [2024-12-13 23:43:51.883046] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107408 ]
00:10:21.501  [2024-12-13 23:43:52.050678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:21.760  [2024-12-13 23:43:52.267576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val=
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val=
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val=0x1
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val=
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val=
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val=dif_generate_copy
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@24 -- # accel_opc=dif_generate_copy
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val=
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val=software
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@23 -- # accel_module=software
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val=32
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val=32
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val=1
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val=No
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val=
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:21.760   23:43:52	-- accel/accel.sh@21 -- # val=
00:10:21.760   23:43:52	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # IFS=:
00:10:21.760   23:43:52	-- accel/accel.sh@20 -- # read -r var val
00:10:23.663   23:43:54	-- accel/accel.sh@21 -- # val=
00:10:23.663   23:43:54	-- accel/accel.sh@22 -- # case "$var" in
00:10:23.663   23:43:54	-- accel/accel.sh@20 -- # IFS=:
00:10:23.663   23:43:54	-- accel/accel.sh@20 -- # read -r var val
00:10:23.663   23:43:54	-- accel/accel.sh@21 -- # val=
00:10:23.663   23:43:54	-- accel/accel.sh@22 -- # case "$var" in
00:10:23.663   23:43:54	-- accel/accel.sh@20 -- # IFS=:
00:10:23.663   23:43:54	-- accel/accel.sh@20 -- # read -r var val
00:10:23.663   23:43:54	-- accel/accel.sh@21 -- # val=
00:10:23.663   23:43:54	-- accel/accel.sh@22 -- # case "$var" in
00:10:23.663   23:43:54	-- accel/accel.sh@20 -- # IFS=:
00:10:23.663   23:43:54	-- accel/accel.sh@20 -- # read -r var val
00:10:23.663   23:43:54	-- accel/accel.sh@21 -- # val=
00:10:23.663   23:43:54	-- accel/accel.sh@22 -- # case "$var" in
00:10:23.663   23:43:54	-- accel/accel.sh@20 -- # IFS=:
00:10:23.663   23:43:54	-- accel/accel.sh@20 -- # read -r var val
00:10:23.663   23:43:54	-- accel/accel.sh@21 -- # val=
00:10:23.663   23:43:54	-- accel/accel.sh@22 -- # case "$var" in
00:10:23.663   23:43:54	-- accel/accel.sh@20 -- # IFS=:
00:10:23.663   23:43:54	-- accel/accel.sh@20 -- # read -r var val
00:10:23.663   23:43:54	-- accel/accel.sh@21 -- # val=
00:10:23.663   23:43:54	-- accel/accel.sh@22 -- # case "$var" in
00:10:23.663   23:43:54	-- accel/accel.sh@20 -- # IFS=:
00:10:23.663   23:43:54	-- accel/accel.sh@20 -- # read -r var val
00:10:23.663   23:43:54	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:23.663   23:43:54	-- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]]
00:10:23.663   23:43:54	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:23.663  
00:10:23.663  real	0m4.776s
00:10:23.663  user	0m4.204s
00:10:23.663  sys	0m0.400s
00:10:23.663   23:43:54	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:23.663   23:43:54	-- common/autotest_common.sh@10 -- # set +x
00:10:23.663  ************************************
00:10:23.663  END TEST accel_dif_generate_copy
00:10:23.663  ************************************
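[annotation] Across the three DIF workloads at the same 4096-byte transfer size, the tables above report 117760/s for dif_verify, 142144/s for dif_generate, and 109856/s for dif_generate_copy; reading the gap as the cost of the extra copy step is an interpretation of the results, not harness output:

    # dif_generate_copy throughput relative to dif_generate, in percent
    echo $(( 109856 * 100 / 142144 ))   # -> 77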
00:10:23.663   23:43:54	-- accel/accel.sh@107 -- # [[ y == y ]]
00:10:23.663   23:43:54	-- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:23.663   23:43:54	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:10:23.663   23:43:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:23.663   23:43:54	-- common/autotest_common.sh@10 -- # set +x
00:10:23.663  ************************************
00:10:23.663  START TEST accel_comp
00:10:23.663  ************************************
00:10:23.663   23:43:54	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:23.663   23:43:54	-- accel/accel.sh@16 -- # local accel_opc
00:10:23.663   23:43:54	-- accel/accel.sh@17 -- # local accel_module
00:10:23.663    23:43:54	-- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:23.663    23:43:54	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:23.663     23:43:54	-- accel/accel.sh@12 -- # build_accel_config
00:10:23.663     23:43:54	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:23.663     23:43:54	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:23.663     23:43:54	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:23.663     23:43:54	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:23.663     23:43:54	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:23.663     23:43:54	-- accel/accel.sh@41 -- # local IFS=,
00:10:23.663     23:43:54	-- accel/accel.sh@42 -- # jq -r .
00:10:23.663  [2024-12-13 23:43:54.356135] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:23.663  [2024-12-13 23:43:54.356507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107458 ]
00:10:23.922  [2024-12-13 23:43:54.529118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:24.180  [2024-12-13 23:43:54.709953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:26.083   23:43:56	-- accel/accel.sh@18 -- # out='Preparing input file...
00:10:26.083  
00:10:26.083  SPDK Configuration:
00:10:26.083  Core mask:      0x1
00:10:26.083  
00:10:26.083  Accel Perf Configuration:
00:10:26.083  Workload Type:  compress
00:10:26.083  Transfer size:  4096 bytes
00:10:26.083  Vector count    1
00:10:26.083  Module:         software
00:10:26.083  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:26.083  Queue depth:    32
00:10:26.083  Allocate depth: 32
00:10:26.083  # threads/core: 1
00:10:26.083  Run time:       1 seconds
00:10:26.083  Verify:         No
00:10:26.083  
00:10:26.083  Running for 1 seconds...
00:10:26.083  
00:10:26.083  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:26.083  ------------------------------------------------------------------------------------
00:10:26.083  0,0                       60128/s        234 MiB/s                0                0
00:10:26.083  ====================================================================================
00:10:26.083  Total                     60128/s        234 MiB/s                0                0'
00:10:26.083   23:43:56	-- accel/accel.sh@20 -- # IFS=:
00:10:26.083   23:43:56	-- accel/accel.sh@20 -- # read -r var val
00:10:26.083    23:43:56	-- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:26.083    23:43:56	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:26.083     23:43:56	-- accel/accel.sh@12 -- # build_accel_config
00:10:26.083     23:43:56	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:26.083     23:43:56	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:26.083     23:43:56	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:26.083     23:43:56	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:26.083     23:43:56	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:26.083     23:43:56	-- accel/accel.sh@41 -- # local IFS=,
00:10:26.083     23:43:56	-- accel/accel.sh@42 -- # jq -r .
00:10:26.083  [2024-12-13 23:43:56.736497] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:26.083  [2024-12-13 23:43:56.736857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107493 ]
00:10:26.342  [2024-12-13 23:43:56.905254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:26.601  [2024-12-13 23:43:57.101038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val=
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val=
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val=
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val=0x1
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val=
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val=
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val=compress
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@24 -- # accel_opc=compress
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val=
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val=software
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@23 -- # accel_module=software
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val=32
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val=32
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val=1
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val=No
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.601   23:43:57	-- accel/accel.sh@21 -- # val=
00:10:26.601   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.601   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.602   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:26.602   23:43:57	-- accel/accel.sh@21 -- # val=
00:10:26.602   23:43:57	-- accel/accel.sh@22 -- # case "$var" in
00:10:26.602   23:43:57	-- accel/accel.sh@20 -- # IFS=:
00:10:26.602   23:43:57	-- accel/accel.sh@20 -- # read -r var val
00:10:28.507   23:43:59	-- accel/accel.sh@21 -- # val=
00:10:28.507   23:43:59	-- accel/accel.sh@22 -- # case "$var" in
00:10:28.507   23:43:59	-- accel/accel.sh@20 -- # IFS=:
00:10:28.507   23:43:59	-- accel/accel.sh@20 -- # read -r var val
00:10:28.507   23:43:59	-- accel/accel.sh@21 -- # val=
00:10:28.507   23:43:59	-- accel/accel.sh@22 -- # case "$var" in
00:10:28.507   23:43:59	-- accel/accel.sh@20 -- # IFS=:
00:10:28.507   23:43:59	-- accel/accel.sh@20 -- # read -r var val
00:10:28.507   23:43:59	-- accel/accel.sh@21 -- # val=
00:10:28.507   23:43:59	-- accel/accel.sh@22 -- # case "$var" in
00:10:28.507   23:43:59	-- accel/accel.sh@20 -- # IFS=:
00:10:28.507   23:43:59	-- accel/accel.sh@20 -- # read -r var val
00:10:28.507   23:43:59	-- accel/accel.sh@21 -- # val=
00:10:28.507   23:43:59	-- accel/accel.sh@22 -- # case "$var" in
00:10:28.507   23:43:59	-- accel/accel.sh@20 -- # IFS=:
00:10:28.507   23:43:59	-- accel/accel.sh@20 -- # read -r var val
00:10:28.507   23:43:59	-- accel/accel.sh@21 -- # val=
00:10:28.507   23:43:59	-- accel/accel.sh@22 -- # case "$var" in
00:10:28.507   23:43:59	-- accel/accel.sh@20 -- # IFS=:
00:10:28.507   23:43:59	-- accel/accel.sh@20 -- # read -r var val
00:10:28.507   23:43:59	-- accel/accel.sh@21 -- # val=
00:10:28.507   23:43:59	-- accel/accel.sh@22 -- # case "$var" in
00:10:28.507   23:43:59	-- accel/accel.sh@20 -- # IFS=:
00:10:28.507   23:43:59	-- accel/accel.sh@20 -- # read -r var val
00:10:28.507   23:43:59	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:28.507   23:43:59	-- accel/accel.sh@28 -- # [[ -n compress ]]
00:10:28.507   23:43:59	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:28.507  
00:10:28.507  real	0m4.790s
00:10:28.507  user	0m4.196s
00:10:28.507  sys	0m0.429s
00:10:28.507   23:43:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:28.507   23:43:59	-- common/autotest_common.sh@10 -- # set +x
00:10:28.507  ************************************
00:10:28.507  END TEST accel_comp
00:10:28.507  ************************************
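[annotation] The compress test adds the -l flag, which points accel_perf at the input file reported as "File Name" in the config dump; the accel_decomp test that follows reuses the same file and adds -y to verify the round trip. A standalone sketch under the same path assumptions as above:

    PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
    "$PERF" -t 1 -w compress -l "$BIB"          # compress the test file
    "$PERF" -t 1 -w decompress -l "$BIB" -y     # decompress and verify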
00:10:28.507   23:43:59	-- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:28.507   23:43:59	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:10:28.507   23:43:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:28.507   23:43:59	-- common/autotest_common.sh@10 -- # set +x
00:10:28.507  ************************************
00:10:28.507  START TEST accel_decomp
00:10:28.507  ************************************
00:10:28.507   23:43:59	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:28.507   23:43:59	-- accel/accel.sh@16 -- # local accel_opc
00:10:28.507   23:43:59	-- accel/accel.sh@17 -- # local accel_module
00:10:28.507    23:43:59	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:28.507    23:43:59	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:28.507     23:43:59	-- accel/accel.sh@12 -- # build_accel_config
00:10:28.507     23:43:59	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:28.507     23:43:59	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:28.507     23:43:59	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:28.507     23:43:59	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:28.507     23:43:59	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:28.507     23:43:59	-- accel/accel.sh@41 -- # local IFS=,
00:10:28.507     23:43:59	-- accel/accel.sh@42 -- # jq -r .
00:10:28.507  [2024-12-13 23:43:59.194401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:28.507  [2024-12-13 23:43:59.194716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107545 ]
00:10:28.766  [2024-12-13 23:43:59.363911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:29.025  [2024-12-13 23:43:59.544725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:30.929   23:44:01	-- accel/accel.sh@18 -- # out='Preparing input file...
00:10:30.929  
00:10:30.929  SPDK Configuration:
00:10:30.929  Core mask:      0x1
00:10:30.929  
00:10:30.929  Accel Perf Configuration:
00:10:30.929  Workload Type:  decompress
00:10:30.929  Transfer size:  4096 bytes
00:10:30.929  Vector count    1
00:10:30.929  Module:         software
00:10:30.929  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:30.929  Queue depth:    32
00:10:30.929  Allocate depth: 32
00:10:30.929  # threads/core: 1
00:10:30.929  Run time:       1 seconds
00:10:30.929  Verify:         Yes
00:10:30.929  
00:10:30.929  Running for 1 seconds...
00:10:30.929  
00:10:30.929  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:30.929  ------------------------------------------------------------------------------------
00:10:30.929  0,0                       75168/s        138 MiB/s                0                0
00:10:30.929  ====================================================================================
00:10:30.929  Total                     75168/s        293 MiB/s                0                0'
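The Total row follows from transfer rate times transfer size: 75168 transfers/s at 4096 bytes each is about 293 MiB/s (the per-core bandwidth column reports a lower figure in these runs). A quick check of that product:

    echo $(( 75168 * 4096 / 1024 / 1024 ))   # prints 293 (MiB/s)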
00:10:30.929   23:44:01	-- accel/accel.sh@20 -- # IFS=:
00:10:30.929   23:44:01	-- accel/accel.sh@20 -- # read -r var val
00:10:30.929    23:44:01	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:30.929    23:44:01	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:30.929     23:44:01	-- accel/accel.sh@12 -- # build_accel_config
00:10:30.929     23:44:01	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:30.929     23:44:01	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:30.929     23:44:01	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:30.929     23:44:01	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:30.929     23:44:01	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:30.929     23:44:01	-- accel/accel.sh@41 -- # local IFS=,
00:10:30.929     23:44:01	-- accel/accel.sh@42 -- # jq -r .
00:10:30.929  [2024-12-13 23:44:01.573473] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:30.929  [2024-12-13 23:44:01.573849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107580 ]
00:10:31.187  [2024-12-13 23:44:01.744624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:31.445  [2024-12-13 23:44:01.976376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:31.704   23:44:02	-- accel/accel.sh@21 -- # val=
00:10:31.704   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.704   23:44:02	-- accel/accel.sh@21 -- # val=
00:10:31.704   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.704   23:44:02	-- accel/accel.sh@21 -- # val=
00:10:31.704   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.704   23:44:02	-- accel/accel.sh@21 -- # val=0x1
00:10:31.704   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.704   23:44:02	-- accel/accel.sh@21 -- # val=
00:10:31.704   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.704   23:44:02	-- accel/accel.sh@21 -- # val=
00:10:31.704   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.704   23:44:02	-- accel/accel.sh@21 -- # val=decompress
00:10:31.704   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.704   23:44:02	-- accel/accel.sh@24 -- # accel_opc=decompress
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.704   23:44:02	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:31.704   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.704   23:44:02	-- accel/accel.sh@21 -- # val=
00:10:31.704   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.704   23:44:02	-- accel/accel.sh@21 -- # val=software
00:10:31.704   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.704   23:44:02	-- accel/accel.sh@23 -- # accel_module=software
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.704   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.704   23:44:02	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:31.705   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.705   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.705   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.705   23:44:02	-- accel/accel.sh@21 -- # val=32
00:10:31.705   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.705   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.705   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.705   23:44:02	-- accel/accel.sh@21 -- # val=32
00:10:31.705   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.705   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.705   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.705   23:44:02	-- accel/accel.sh@21 -- # val=1
00:10:31.705   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.705   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.705   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.705   23:44:02	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:31.705   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.705   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.705   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.705   23:44:02	-- accel/accel.sh@21 -- # val=Yes
00:10:31.705   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.705   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.705   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.705   23:44:02	-- accel/accel.sh@21 -- # val=
00:10:31.705   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.705   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.705   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:31.705   23:44:02	-- accel/accel.sh@21 -- # val=
00:10:31.705   23:44:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:31.705   23:44:02	-- accel/accel.sh@20 -- # IFS=:
00:10:31.705   23:44:02	-- accel/accel.sh@20 -- # read -r var val
00:10:33.609   23:44:03	-- accel/accel.sh@21 -- # val=
00:10:33.609   23:44:03	-- accel/accel.sh@22 -- # case "$var" in
00:10:33.609   23:44:03	-- accel/accel.sh@20 -- # IFS=:
00:10:33.609   23:44:03	-- accel/accel.sh@20 -- # read -r var val
00:10:33.609   23:44:03	-- accel/accel.sh@21 -- # val=
00:10:33.609   23:44:03	-- accel/accel.sh@22 -- # case "$var" in
00:10:33.609   23:44:03	-- accel/accel.sh@20 -- # IFS=:
00:10:33.609   23:44:03	-- accel/accel.sh@20 -- # read -r var val
00:10:33.609   23:44:03	-- accel/accel.sh@21 -- # val=
00:10:33.609   23:44:03	-- accel/accel.sh@22 -- # case "$var" in
00:10:33.609   23:44:03	-- accel/accel.sh@20 -- # IFS=:
00:10:33.609   23:44:03	-- accel/accel.sh@20 -- # read -r var val
00:10:33.609   23:44:03	-- accel/accel.sh@21 -- # val=
00:10:33.609   23:44:03	-- accel/accel.sh@22 -- # case "$var" in
00:10:33.609   23:44:03	-- accel/accel.sh@20 -- # IFS=:
00:10:33.609   23:44:03	-- accel/accel.sh@20 -- # read -r var val
00:10:33.609   23:44:03	-- accel/accel.sh@21 -- # val=
00:10:33.609   23:44:03	-- accel/accel.sh@22 -- # case "$var" in
00:10:33.609   23:44:03	-- accel/accel.sh@20 -- # IFS=:
00:10:33.609   23:44:03	-- accel/accel.sh@20 -- # read -r var val
00:10:33.609   23:44:03	-- accel/accel.sh@21 -- # val=
00:10:33.609   23:44:03	-- accel/accel.sh@22 -- # case "$var" in
00:10:33.609   23:44:03	-- accel/accel.sh@20 -- # IFS=:
00:10:33.609   23:44:03	-- accel/accel.sh@20 -- # read -r var val
00:10:33.609   23:44:03	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:33.609   23:44:03	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:10:33.609   23:44:03	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:33.609  
00:10:33.609  real	0m4.811s
00:10:33.609  user	0m4.214s
00:10:33.609  sys	0m0.424s
00:10:33.609   23:44:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:33.609   23:44:03	-- common/autotest_common.sh@10 -- # set +x
00:10:33.609  ************************************
00:10:33.609  END TEST accel_decomp
00:10:33.609  ************************************
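The _full variant below adds -o 0. Judging from the configuration dump that follows (Transfer size: 111250 bytes), a size of 0 appears to make accel_perf derive the transfer size from the input file rather than use the 4096-byte default; this is inferred from the trace, not from the accel_perf sources. Sketch, same assumptions as above:

    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0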
00:10:33.609   23:44:04	-- accel/accel.sh@110 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:10:33.609   23:44:04	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:10:33.609   23:44:04	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:33.609   23:44:04	-- common/autotest_common.sh@10 -- # set +x
00:10:33.609  ************************************
00:10:33.609  START TEST accel_decomp_full
00:10:33.609  ************************************
00:10:33.609   23:44:04	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:10:33.609   23:44:04	-- accel/accel.sh@16 -- # local accel_opc
00:10:33.609   23:44:04	-- accel/accel.sh@17 -- # local accel_module
00:10:33.609    23:44:04	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:10:33.609    23:44:04	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:10:33.609     23:44:04	-- accel/accel.sh@12 -- # build_accel_config
00:10:33.609     23:44:04	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:33.609     23:44:04	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:33.609     23:44:04	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:33.609     23:44:04	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:33.609     23:44:04	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:33.609     23:44:04	-- accel/accel.sh@41 -- # local IFS=,
00:10:33.609     23:44:04	-- accel/accel.sh@42 -- # jq -r .
00:10:33.609  [2024-12-13 23:44:04.055622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:33.609  [2024-12-13 23:44:04.055983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107634 ]
00:10:33.609  [2024-12-13 23:44:04.222297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:33.868  [2024-12-13 23:44:04.401276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:35.771   23:44:06	-- accel/accel.sh@18 -- # out='Preparing input file...
00:10:35.771  
00:10:35.771  SPDK Configuration:
00:10:35.771  Core mask:      0x1
00:10:35.771  
00:10:35.771  Accel Perf Configuration:
00:10:35.771  Workload Type:  decompress
00:10:35.771  Transfer size:  111250 bytes
00:10:35.771  Vector count:   1
00:10:35.771  Module:         software
00:10:35.771  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:35.771  Queue depth:    32
00:10:35.771  Allocate depth: 32
00:10:35.771  # threads/core: 1
00:10:35.771  Run time:       1 seconds
00:10:35.771  Verify:         Yes
00:10:35.771  
00:10:35.771  Running for 1 seconds...
00:10:35.771  
00:10:35.771  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:35.771  ------------------------------------------------------------------------------------
00:10:35.771  0,0                        5504/s        227 MiB/s                0                0
00:10:35.771  ====================================================================================
00:10:35.771  Total                      5504/s        583 MiB/s                0                0'
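Again the Total row matches rate times size: 5504 transfers/s at 111250 bytes is about 583 MiB/s:

    echo $(( 5504 * 111250 / 1024 / 1024 ))   # prints 583 (MiB/s)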
00:10:35.771   23:44:06	-- accel/accel.sh@20 -- # IFS=:
00:10:35.771   23:44:06	-- accel/accel.sh@20 -- # read -r var val
00:10:35.771    23:44:06	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:10:35.771    23:44:06	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:10:35.771     23:44:06	-- accel/accel.sh@12 -- # build_accel_config
00:10:35.771     23:44:06	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:35.771     23:44:06	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:35.771     23:44:06	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:35.771     23:44:06	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:35.771     23:44:06	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:35.771     23:44:06	-- accel/accel.sh@41 -- # local IFS=,
00:10:35.771     23:44:06	-- accel/accel.sh@42 -- # jq -r .
00:10:35.771  [2024-12-13 23:44:06.451414] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:35.771  [2024-12-13 23:44:06.451885] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107674 ]
00:10:36.030  [2024-12-13 23:44:06.628593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:36.288  [2024-12-13 23:44:06.825071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val=
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val=
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val=
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val=0x1
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val=
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val=
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val=decompress
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@24 -- # accel_opc=decompress
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val='111250 bytes'
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val=
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val=software
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@23 -- # accel_module=software
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val=32
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val=32
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val=1
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val=Yes
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.547   23:44:07	-- accel/accel.sh@21 -- # val=
00:10:36.547   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.547   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.548   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:36.548   23:44:07	-- accel/accel.sh@21 -- # val=
00:10:36.548   23:44:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:36.548   23:44:07	-- accel/accel.sh@20 -- # IFS=:
00:10:36.548   23:44:07	-- accel/accel.sh@20 -- # read -r var val
00:10:38.451   23:44:08	-- accel/accel.sh@21 -- # val=
00:10:38.451   23:44:08	-- accel/accel.sh@22 -- # case "$var" in
00:10:38.451   23:44:08	-- accel/accel.sh@20 -- # IFS=:
00:10:38.451   23:44:08	-- accel/accel.sh@20 -- # read -r var val
00:10:38.451   23:44:08	-- accel/accel.sh@21 -- # val=
00:10:38.451   23:44:08	-- accel/accel.sh@22 -- # case "$var" in
00:10:38.451   23:44:08	-- accel/accel.sh@20 -- # IFS=:
00:10:38.451   23:44:08	-- accel/accel.sh@20 -- # read -r var val
00:10:38.451   23:44:08	-- accel/accel.sh@21 -- # val=
00:10:38.451   23:44:08	-- accel/accel.sh@22 -- # case "$var" in
00:10:38.451   23:44:08	-- accel/accel.sh@20 -- # IFS=:
00:10:38.451   23:44:08	-- accel/accel.sh@20 -- # read -r var val
00:10:38.451   23:44:08	-- accel/accel.sh@21 -- # val=
00:10:38.451   23:44:08	-- accel/accel.sh@22 -- # case "$var" in
00:10:38.451   23:44:08	-- accel/accel.sh@20 -- # IFS=:
00:10:38.451   23:44:08	-- accel/accel.sh@20 -- # read -r var val
00:10:38.451   23:44:08	-- accel/accel.sh@21 -- # val=
00:10:38.451   23:44:08	-- accel/accel.sh@22 -- # case "$var" in
00:10:38.451   23:44:08	-- accel/accel.sh@20 -- # IFS=:
00:10:38.451   23:44:08	-- accel/accel.sh@20 -- # read -r var val
00:10:38.451   23:44:08	-- accel/accel.sh@21 -- # val=
00:10:38.451   23:44:08	-- accel/accel.sh@22 -- # case "$var" in
00:10:38.451   23:44:08	-- accel/accel.sh@20 -- # IFS=:
00:10:38.451   23:44:08	-- accel/accel.sh@20 -- # read -r var val
00:10:38.451   23:44:08	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:38.451   23:44:08	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:10:38.451   23:44:08	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:38.451  
00:10:38.451  real	0m4.818s
00:10:38.451  user	0m4.241s
00:10:38.451  sys	0m0.437s
00:10:38.451   23:44:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:38.451   23:44:08	-- common/autotest_common.sh@10 -- # set +x
00:10:38.451  ************************************
00:10:38.451  END TEST accel_decomp_full
00:10:38.451  ************************************
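The mcore variant adds -m 0xf, a core mask for the reactors: 0xf is binary 1111, so bits 0-3 are set and four reactors start (see the four 'Reactor started on core N' notices below). Sketch, same assumptions as above:

    # -m 0xf -> run on cores 0-3
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf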
00:10:38.451   23:44:08	-- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:10:38.451   23:44:08	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:10:38.451   23:44:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:38.451   23:44:08	-- common/autotest_common.sh@10 -- # set +x
00:10:38.451  ************************************
00:10:38.451  START TEST accel_decomp_mcore
00:10:38.451  ************************************
00:10:38.451   23:44:08	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:10:38.451   23:44:08	-- accel/accel.sh@16 -- # local accel_opc
00:10:38.451   23:44:08	-- accel/accel.sh@17 -- # local accel_module
00:10:38.451    23:44:08	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:10:38.451    23:44:08	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:10:38.451     23:44:08	-- accel/accel.sh@12 -- # build_accel_config
00:10:38.451     23:44:08	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:38.451     23:44:08	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:38.451     23:44:08	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:38.451     23:44:08	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:38.451     23:44:08	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:38.451     23:44:08	-- accel/accel.sh@41 -- # local IFS=,
00:10:38.451     23:44:08	-- accel/accel.sh@42 -- # jq -r .
00:10:38.451  [2024-12-13 23:44:08.932975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:38.451  [2024-12-13 23:44:08.933311] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107719 ]
00:10:38.451  [2024-12-13 23:44:09.119661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:38.710  [2024-12-13 23:44:09.307553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:38.710  [2024-12-13 23:44:09.307686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:10:38.710  [2024-12-13 23:44:09.307836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:10:38.710  [2024-12-13 23:44:09.308077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:41.241   23:44:11	-- accel/accel.sh@18 -- # out='Preparing input file...
00:10:41.241  
00:10:41.242  SPDK Configuration:
00:10:41.242  Core mask:      0xf
00:10:41.242  
00:10:41.242  Accel Perf Configuration:
00:10:41.242  Workload Type:  decompress
00:10:41.242  Transfer size:  4096 bytes
00:10:41.242  Vector count:   1
00:10:41.242  Module:         software
00:10:41.242  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:41.242  Queue depth:    32
00:10:41.242  Allocate depth: 32
00:10:41.242  # threads/core: 1
00:10:41.242  Run time:       1 seconds
00:10:41.242  Verify:         Yes
00:10:41.242  
00:10:41.242  Running for 1 seconds...
00:10:41.242  
00:10:41.242  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:41.242  ------------------------------------------------------------------------------------
00:10:41.242  0,0                       60928/s        112 MiB/s                0                0
00:10:41.242  3,0                       60864/s        112 MiB/s                0                0
00:10:41.242  2,0                       61696/s        113 MiB/s                0                0
00:10:41.242  1,0                       60512/s        111 MiB/s                0                0
00:10:41.242  ====================================================================================
00:10:41.242  Total                    244000/s        953 MiB/s                0                0'
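The per-core rows sum to the Total: 60928 + 60864 + 61696 + 60512 = 244000 transfers/s, which at 4096 bytes is about 953 MiB/s, close to 4x the single-core accel_decomp result above:

    echo $(( (60928 + 60864 + 61696 + 60512) * 4096 / 1024 / 1024 ))   # prints 953 (MiB/s)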
00:10:41.242   23:44:11	-- accel/accel.sh@20 -- # IFS=:
00:10:41.242   23:44:11	-- accel/accel.sh@20 -- # read -r var val
00:10:41.242    23:44:11	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:10:41.242    23:44:11	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:10:41.242     23:44:11	-- accel/accel.sh@12 -- # build_accel_config
00:10:41.242     23:44:11	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:41.242     23:44:11	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:41.242     23:44:11	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:41.242     23:44:11	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:41.242     23:44:11	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:41.242     23:44:11	-- accel/accel.sh@41 -- # local IFS=,
00:10:41.242     23:44:11	-- accel/accel.sh@42 -- # jq -r .
00:10:41.242  [2024-12-13 23:44:11.395500] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:41.242  [2024-12-13 23:44:11.395895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107764 ]
00:10:41.242  [2024-12-13 23:44:11.584300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:41.242  [2024-12-13 23:44:11.790712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:41.242  [2024-12-13 23:44:11.790837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:10:41.242  [2024-12-13 23:44:11.790937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:10:41.242  [2024-12-13 23:44:11.791186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:41.500   23:44:12	-- accel/accel.sh@21 -- # val=
00:10:41.500   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.500   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.500   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.500   23:44:12	-- accel/accel.sh@21 -- # val=
00:10:41.500   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.500   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.500   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.500   23:44:12	-- accel/accel.sh@21 -- # val=
00:10:41.500   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.500   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.500   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.500   23:44:12	-- accel/accel.sh@21 -- # val=0xf
00:10:41.501   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.501   23:44:12	-- accel/accel.sh@21 -- # val=
00:10:41.501   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.501   23:44:12	-- accel/accel.sh@21 -- # val=
00:10:41.501   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.501   23:44:12	-- accel/accel.sh@21 -- # val=decompress
00:10:41.501   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.501   23:44:12	-- accel/accel.sh@24 -- # accel_opc=decompress
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.501   23:44:12	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:41.501   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.501   23:44:12	-- accel/accel.sh@21 -- # val=
00:10:41.501   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.501   23:44:12	-- accel/accel.sh@21 -- # val=software
00:10:41.501   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.501   23:44:12	-- accel/accel.sh@23 -- # accel_module=software
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.501   23:44:12	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:41.501   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.501   23:44:12	-- accel/accel.sh@21 -- # val=32
00:10:41.501   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.501   23:44:12	-- accel/accel.sh@21 -- # val=32
00:10:41.501   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.501   23:44:12	-- accel/accel.sh@21 -- # val=1
00:10:41.501   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.501   23:44:12	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:41.501   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.501   23:44:12	-- accel/accel.sh@21 -- # val=Yes
00:10:41.501   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.501   23:44:12	-- accel/accel.sh@21 -- # val=
00:10:41.501   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:41.501   23:44:12	-- accel/accel.sh@21 -- # val=
00:10:41.501   23:44:12	-- accel/accel.sh@22 -- # case "$var" in
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # IFS=:
00:10:41.501   23:44:12	-- accel/accel.sh@20 -- # read -r var val
00:10:43.403   23:44:13	-- accel/accel.sh@21 -- # val=
00:10:43.403   23:44:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:43.403   23:44:13	-- accel/accel.sh@20 -- # IFS=:
00:10:43.403   23:44:13	-- accel/accel.sh@20 -- # read -r var val
00:10:43.403   23:44:13	-- accel/accel.sh@21 -- # val=
00:10:43.404   23:44:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:43.404   23:44:13	-- accel/accel.sh@20 -- # IFS=:
00:10:43.404   23:44:13	-- accel/accel.sh@20 -- # read -r var val
00:10:43.404   23:44:13	-- accel/accel.sh@21 -- # val=
00:10:43.404   23:44:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:43.404   23:44:13	-- accel/accel.sh@20 -- # IFS=:
00:10:43.404   23:44:13	-- accel/accel.sh@20 -- # read -r var val
00:10:43.404   23:44:13	-- accel/accel.sh@21 -- # val=
00:10:43.404   23:44:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:43.404   23:44:13	-- accel/accel.sh@20 -- # IFS=:
00:10:43.404   23:44:13	-- accel/accel.sh@20 -- # read -r var val
00:10:43.404   23:44:13	-- accel/accel.sh@21 -- # val=
00:10:43.404   23:44:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:43.404   23:44:13	-- accel/accel.sh@20 -- # IFS=:
00:10:43.404   23:44:13	-- accel/accel.sh@20 -- # read -r var val
00:10:43.404   23:44:13	-- accel/accel.sh@21 -- # val=
00:10:43.404   23:44:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:43.404   23:44:13	-- accel/accel.sh@20 -- # IFS=:
00:10:43.404   23:44:13	-- accel/accel.sh@20 -- # read -r var val
00:10:43.404   23:44:13	-- accel/accel.sh@21 -- # val=
00:10:43.404   23:44:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:43.404   23:44:13	-- accel/accel.sh@20 -- # IFS=:
00:10:43.404   23:44:13	-- accel/accel.sh@20 -- # read -r var val
00:10:43.404   23:44:13	-- accel/accel.sh@21 -- # val=
00:10:43.404   23:44:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:43.404   23:44:13	-- accel/accel.sh@20 -- # IFS=:
00:10:43.404   23:44:13	-- accel/accel.sh@20 -- # read -r var val
00:10:43.404   23:44:13	-- accel/accel.sh@21 -- # val=
00:10:43.404   23:44:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:43.404   23:44:13	-- accel/accel.sh@20 -- # IFS=:
00:10:43.404   23:44:13	-- accel/accel.sh@20 -- # read -r var val
00:10:43.404   23:44:13	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:43.404   23:44:13	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:10:43.404   23:44:13	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:43.404  
00:10:43.404  real	0m4.965s
00:10:43.404  user	0m14.411s
00:10:43.404  sys	0m0.499s
00:10:43.404   23:44:13	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:43.404   23:44:13	-- common/autotest_common.sh@10 -- # set +x
00:10:43.404  ************************************
00:10:43.404  END TEST accel_decomp_mcore
00:10:43.404  ************************************
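accel_decomp_full_mcore combines the two previous options: full-size transfers (-o 0) spread across four cores (-m 0xf). Sketch, same assumptions as above:

    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf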
00:10:43.404   23:44:13	-- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:10:43.404   23:44:13	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:10:43.404   23:44:13	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:43.404   23:44:13	-- common/autotest_common.sh@10 -- # set +x
00:10:43.404  ************************************
00:10:43.404  START TEST accel_decomp_full_mcore
00:10:43.404  ************************************
00:10:43.404   23:44:13	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:10:43.404   23:44:13	-- accel/accel.sh@16 -- # local accel_opc
00:10:43.404   23:44:13	-- accel/accel.sh@17 -- # local accel_module
00:10:43.404    23:44:13	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:10:43.404    23:44:13	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:10:43.404     23:44:13	-- accel/accel.sh@12 -- # build_accel_config
00:10:43.404     23:44:13	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:43.404     23:44:13	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:43.404     23:44:13	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:43.404     23:44:13	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:43.404     23:44:13	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:43.404     23:44:13	-- accel/accel.sh@41 -- # local IFS=,
00:10:43.404     23:44:13	-- accel/accel.sh@42 -- # jq -r .
00:10:43.404  [2024-12-13 23:44:13.963233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:43.404  [2024-12-13 23:44:13.963706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107814 ]
00:10:43.663  [2024-12-13 23:44:14.165340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:43.663  [2024-12-13 23:44:14.359966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:43.663  [2024-12-13 23:44:14.360101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:10:43.663  [2024-12-13 23:44:14.360197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:10:43.663  [2024-12-13 23:44:14.360566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:46.200   23:44:16	-- accel/accel.sh@18 -- # out='Preparing input file...
00:10:46.200  
00:10:46.200  SPDK Configuration:
00:10:46.200  Core mask:      0xf
00:10:46.200  
00:10:46.200  Accel Perf Configuration:
00:10:46.200  Workload Type:  decompress
00:10:46.200  Transfer size:  111250 bytes
00:10:46.200  Vector count:   1
00:10:46.200  Module:         software
00:10:46.200  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:46.200  Queue depth:    32
00:10:46.200  Allocate depth: 32
00:10:46.200  # threads/core: 1
00:10:46.200  Run time:       1 seconds
00:10:46.200  Verify:         Yes
00:10:46.200  
00:10:46.200  Running for 1 seconds...
00:10:46.200  
00:10:46.200  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:46.200  ------------------------------------------------------------------------------------
00:10:46.200  0,0                        5056/s        208 MiB/s                0                0
00:10:46.200  3,0                        5056/s        208 MiB/s                0                0
00:10:46.200  2,0                        5120/s        211 MiB/s                0                0
00:10:46.200  1,0                        5056/s        208 MiB/s                0                0
00:10:46.200  ====================================================================================
00:10:46.200  Total                     20288/s       2152 MiB/s                0                0'
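Aggregate check: 5056 + 5056 + 5120 + 5056 = 20288 transfers/s, which at 111250 bytes is about 2152 MiB/s, matching the Total row:

    echo $(( (5056 + 5056 + 5120 + 5056) * 111250 / 1024 / 1024 ))   # prints 2152 (MiB/s)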
00:10:46.200   23:44:16	-- accel/accel.sh@20 -- # IFS=:
00:10:46.200   23:44:16	-- accel/accel.sh@20 -- # read -r var val
00:10:46.200    23:44:16	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:10:46.200    23:44:16	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:10:46.200     23:44:16	-- accel/accel.sh@12 -- # build_accel_config
00:10:46.200     23:44:16	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:46.200     23:44:16	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:46.200     23:44:16	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:46.200     23:44:16	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:46.200     23:44:16	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:46.200     23:44:16	-- accel/accel.sh@41 -- # local IFS=,
00:10:46.200     23:44:16	-- accel/accel.sh@42 -- # jq -r .
00:10:46.200  [2024-12-13 23:44:16.469320] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:46.200  [2024-12-13 23:44:16.469650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107865 ]
00:10:46.200  [2024-12-13 23:44:16.641637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:46.200  [2024-12-13 23:44:16.845447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:46.200  [2024-12-13 23:44:16.845623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:10:46.200  [2024-12-13 23:44:16.845724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:10:46.200  [2024-12-13 23:44:16.846038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val=
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val=
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val=
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val=0xf
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val=
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val=
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val=decompress
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@24 -- # accel_opc=decompress
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val='111250 bytes'
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val=
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val=software
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@23 -- # accel_module=software
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val=32
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val=32
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val=1
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val=Yes
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val=
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:46.459   23:44:17	-- accel/accel.sh@21 -- # val=
00:10:46.459   23:44:17	-- accel/accel.sh@22 -- # case "$var" in
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # IFS=:
00:10:46.459   23:44:17	-- accel/accel.sh@20 -- # read -r var val
00:10:48.391   23:44:18	-- accel/accel.sh@21 -- # val=
00:10:48.391   23:44:18	-- accel/accel.sh@22 -- # case "$var" in
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # IFS=:
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # read -r var val
00:10:48.391   23:44:18	-- accel/accel.sh@21 -- # val=
00:10:48.391   23:44:18	-- accel/accel.sh@22 -- # case "$var" in
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # IFS=:
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # read -r var val
00:10:48.391   23:44:18	-- accel/accel.sh@21 -- # val=
00:10:48.391   23:44:18	-- accel/accel.sh@22 -- # case "$var" in
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # IFS=:
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # read -r var val
00:10:48.391   23:44:18	-- accel/accel.sh@21 -- # val=
00:10:48.391   23:44:18	-- accel/accel.sh@22 -- # case "$var" in
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # IFS=:
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # read -r var val
00:10:48.391   23:44:18	-- accel/accel.sh@21 -- # val=
00:10:48.391   23:44:18	-- accel/accel.sh@22 -- # case "$var" in
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # IFS=:
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # read -r var val
00:10:48.391   23:44:18	-- accel/accel.sh@21 -- # val=
00:10:48.391   23:44:18	-- accel/accel.sh@22 -- # case "$var" in
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # IFS=:
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # read -r var val
00:10:48.391   23:44:18	-- accel/accel.sh@21 -- # val=
00:10:48.391   23:44:18	-- accel/accel.sh@22 -- # case "$var" in
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # IFS=:
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # read -r var val
00:10:48.391   23:44:18	-- accel/accel.sh@21 -- # val=
00:10:48.391   23:44:18	-- accel/accel.sh@22 -- # case "$var" in
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # IFS=:
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # read -r var val
00:10:48.391   23:44:18	-- accel/accel.sh@21 -- # val=
00:10:48.391   23:44:18	-- accel/accel.sh@22 -- # case "$var" in
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # IFS=:
00:10:48.391   23:44:18	-- accel/accel.sh@20 -- # read -r var val
00:10:48.391  ************************************
00:10:48.391  END TEST accel_decomp_full_mcore
00:10:48.391  ************************************
00:10:48.391   23:44:18	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:48.391   23:44:18	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:10:48.391   23:44:18	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:48.391  
00:10:48.391  real	0m5.048s
00:10:48.391  user	0m14.702s
00:10:48.391  sys	0m0.487s
00:10:48.391   23:44:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:48.391   23:44:18	-- common/autotest_common.sh@10 -- # set +x
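The mthread variant passes -T 2, which per the '# threads/core: 2' line in the config dump below runs two worker threads on the single reactor core; they show up as rows 0,0 and 0,1 in the results table. Sketch, same assumptions as above:

    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2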
00:10:48.391   23:44:18	-- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:10:48.391   23:44:18	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:10:48.391   23:44:18	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:48.391   23:44:18	-- common/autotest_common.sh@10 -- # set +x
00:10:48.391  ************************************
00:10:48.391  START TEST accel_decomp_mthread
00:10:48.391  ************************************
00:10:48.391   23:44:19	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:10:48.391   23:44:19	-- accel/accel.sh@16 -- # local accel_opc
00:10:48.391   23:44:19	-- accel/accel.sh@17 -- # local accel_module
00:10:48.391    23:44:19	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:10:48.391    23:44:19	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:10:48.391     23:44:19	-- accel/accel.sh@12 -- # build_accel_config
00:10:48.391     23:44:19	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:48.391     23:44:19	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:48.391     23:44:19	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:48.391     23:44:19	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:48.391     23:44:19	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:48.391     23:44:19	-- accel/accel.sh@41 -- # local IFS=,
00:10:48.391     23:44:19	-- accel/accel.sh@42 -- # jq -r .
00:10:48.391  [2024-12-13 23:44:19.051057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:48.391  [2024-12-13 23:44:19.051410] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107913 ]
00:10:48.649  [2024-12-13 23:44:19.220529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:48.908  [2024-12-13 23:44:19.407666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:50.810   23:44:21	-- accel/accel.sh@18 -- # out='Preparing input file...
00:10:50.810  
00:10:50.810  SPDK Configuration:
00:10:50.810  Core mask:      0x1
00:10:50.810  
00:10:50.810  Accel Perf Configuration:
00:10:50.810  Workload Type:  decompress
00:10:50.810  Transfer size:  4096 bytes
00:10:50.810  Vector count:   1
00:10:50.810  Module:         software
00:10:50.810  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:50.810  Queue depth:    32
00:10:50.810  Allocate depth: 32
00:10:50.810  # threads/core: 2
00:10:50.810  Run time:       1 seconds
00:10:50.810  Verify:         Yes
00:10:50.810  
00:10:50.810  Running for 1 seconds...
00:10:50.810  
00:10:50.810  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:50.810  ------------------------------------------------------------------------------------
00:10:50.810  0,1                       38080/s         70 MiB/s                0                0
00:10:50.810  0,0                       37952/s         69 MiB/s                0                0
00:10:50.810  ====================================================================================
00:10:50.810  Total                     76032/s        297 MiB/s                0                0'
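The two thread rows sum to the Total: 38080 + 37952 = 76032 transfers/s, about 297 MiB/s at 4096 bytes:

    echo $(( (38080 + 37952) * 4096 / 1024 / 1024 ))   # prints 297 (MiB/s)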
00:10:50.810   23:44:21	-- accel/accel.sh@20 -- # IFS=:
00:10:50.810   23:44:21	-- accel/accel.sh@20 -- # read -r var val
00:10:50.810    23:44:21	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:10:50.810    23:44:21	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:10:50.810     23:44:21	-- accel/accel.sh@12 -- # build_accel_config
00:10:50.810     23:44:21	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:50.810     23:44:21	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:50.810     23:44:21	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:50.810     23:44:21	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:50.810     23:44:21	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:50.810     23:44:21	-- accel/accel.sh@41 -- # local IFS=,
00:10:50.810     23:44:21	-- accel/accel.sh@42 -- # jq -r .
00:10:50.810  [2024-12-13 23:44:21.444395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:50.810  [2024-12-13 23:44:21.444814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107955 ]
00:10:51.068  [2024-12-13 23:44:21.610380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:51.326  [2024-12-13 23:44:21.825896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val=
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val=
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val=
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val=0x1
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val=
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val=
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val=decompress
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@24 -- # accel_opc=decompress
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val=
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val=software
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@23 -- # accel_module=software
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val=32
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val=32
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val=2
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val=Yes
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val=
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:51.326   23:44:22	-- accel/accel.sh@21 -- # val=
00:10:51.326   23:44:22	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # IFS=:
00:10:51.326   23:44:22	-- accel/accel.sh@20 -- # read -r var val
00:10:53.228   23:44:23	-- accel/accel.sh@21 -- # val=
00:10:53.228   23:44:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.228   23:44:23	-- accel/accel.sh@20 -- # IFS=:
00:10:53.228   23:44:23	-- accel/accel.sh@20 -- # read -r var val
00:10:53.228   23:44:23	-- accel/accel.sh@21 -- # val=
00:10:53.228   23:44:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.228   23:44:23	-- accel/accel.sh@20 -- # IFS=:
00:10:53.228   23:44:23	-- accel/accel.sh@20 -- # read -r var val
00:10:53.228   23:44:23	-- accel/accel.sh@21 -- # val=
00:10:53.228   23:44:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.228   23:44:23	-- accel/accel.sh@20 -- # IFS=:
00:10:53.228   23:44:23	-- accel/accel.sh@20 -- # read -r var val
00:10:53.228   23:44:23	-- accel/accel.sh@21 -- # val=
00:10:53.228   23:44:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.228   23:44:23	-- accel/accel.sh@20 -- # IFS=:
00:10:53.228   23:44:23	-- accel/accel.sh@20 -- # read -r var val
00:10:53.228   23:44:23	-- accel/accel.sh@21 -- # val=
00:10:53.228   23:44:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.228   23:44:23	-- accel/accel.sh@20 -- # IFS=:
00:10:53.228   23:44:23	-- accel/accel.sh@20 -- # read -r var val
00:10:53.228   23:44:23	-- accel/accel.sh@21 -- # val=
00:10:53.228   23:44:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.228   23:44:23	-- accel/accel.sh@20 -- # IFS=:
00:10:53.228   23:44:23	-- accel/accel.sh@20 -- # read -r var val
00:10:53.228   23:44:23	-- accel/accel.sh@21 -- # val=
00:10:53.228   23:44:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.228   23:44:23	-- accel/accel.sh@20 -- # IFS=:
00:10:53.228   23:44:23	-- accel/accel.sh@20 -- # read -r var val
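The block of identical `IFS=:` / `read -r var val` / `case "$var" in` traces above is a single loop in accel/accel.sh walking accel_perf's summary output line by line; most keys are discarded (the empty `val=` entries), and only the opcode and module are captured for the `accel.sh@28` checks further down. A minimal sketch of that loop (assumed shape, not the verbatim accel.sh source):

```bash
# Hedged sketch: split each "Key:  value" line of $out on ':' and keep the
# two fields the later assertions need; xargs trims the padding around values.
while IFS=: read -r var val; do
    case "$var" in
        *"Workload Type"*) accel_opc=$(echo "$val" | xargs) ;;    # -> decompress
        *Module*)          accel_module=$(echo "$val" | xargs) ;; # -> software
    esac
done <<< "$out"
```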
00:10:53.228  ************************************
00:10:53.228  END TEST accel_decomp_mthread
00:10:53.229  ************************************
00:10:53.229   23:44:23	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:53.229   23:44:23	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:10:53.229   23:44:23	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
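The `\s\o\f\t\w\a\r\e` rendering in the last check is an xtrace artifact, not corruption: inside `[[ ]]` the `==` operator performs glob matching, so bash prints a quoted right-hand side with every character backslash-escaped to show it is matched literally. For example:

```bash
x=software
[[ $x == "software" ]] && echo "literal match"   # xtrace shows \s\o\f\t\w\a\r\e
[[ $x == soft* ]] && echo "glob match"           # unquoted pattern, no escaping
```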
00:10:53.229  
00:10:53.229  real	0m4.833s
00:10:53.229  user	0m4.214s
00:10:53.229  sys	0m0.427s
00:10:53.229   23:44:23	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:53.229   23:44:23	-- common/autotest_common.sh@10 -- # set +x
00:10:53.229   23:44:23	-- accel/accel.sh@114 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:10:53.229   23:44:23	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:10:53.229   23:44:23	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:53.229   23:44:23	-- common/autotest_common.sh@10 -- # set +x
00:10:53.229  ************************************
00:10:53.229  START TEST accel_decomp_full_mthread
00:10:53.229  ************************************
00:10:53.229   23:44:23	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:10:53.229   23:44:23	-- accel/accel.sh@16 -- # local accel_opc
00:10:53.229   23:44:23	-- accel/accel.sh@17 -- # local accel_module
00:10:53.229    23:44:23	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:10:53.229    23:44:23	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:10:53.229     23:44:23	-- accel/accel.sh@12 -- # build_accel_config
00:10:53.229     23:44:23	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:53.229     23:44:23	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:53.229     23:44:23	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:53.229     23:44:23	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:53.229     23:44:23	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:53.229     23:44:23	-- accel/accel.sh@41 -- # local IFS=,
00:10:53.229     23:44:23	-- accel/accel.sh@42 -- # jq -r .
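The `-c /dev/fd/62` argument above is bash process substitution at work: `build_accel_config` emits the accel JSON config on a pipe, so accel_perf reads its configuration without any temporary file (the fd number varies per shell). A hedged reconstruction of the idea; the exact helper lives in accel/accel.sh:

```bash
# All of the '[[ 0 -gt 0 ]]' checks above failed, so no hardware-module
# snippets were appended and the config array stays empty.
accel_json_cfg=()
build_config() {
    local IFS=,   # join array entries with commas inside the JSON list
    jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
}
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(build_config) \
    -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
```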
00:10:53.229  [2024-12-13 23:44:23.936353] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:53.229  [2024-12-13 23:44:23.936542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108002 ]
00:10:53.487  [2024-12-13 23:44:24.105494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:53.746  [2024-12-13 23:44:24.286207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:55.647   23:44:26	-- accel/accel.sh@18 -- # out='Preparing input file...
00:10:55.647  
00:10:55.647  SPDK Configuration:
00:10:55.647  Core mask:      0x1
00:10:55.647  
00:10:55.647  Accel Perf Configuration:
00:10:55.647  Workload Type:  decompress
00:10:55.647  Transfer size:  111250 bytes
00:10:55.647  Vector count    1
00:10:55.647  Module:         software
00:10:55.647  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:55.647  Queue depth:    32
00:10:55.647  Allocate depth: 32
00:10:55.647  # threads/core: 2
00:10:55.647  Run time:       1 seconds
00:10:55.647  Verify:         Yes
00:10:55.647  
00:10:55.647  Running for 1 seconds...
00:10:55.647  
00:10:55.647  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:55.647  ------------------------------------------------------------------------------------
00:10:55.647  0,1                        2848/s        302 MiB/s                0                0
00:10:55.647  0,0                        2816/s        298 MiB/s                0                0
00:10:55.647  ====================================================================================
00:10:55.647  Total                      5664/s        600 MiB/s                0                0'
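The summary arithmetic checks out: each row is transfers/s times the 111250-byte transfer size, and the Total row is the sum of the two worker threads on core 0:

```bash
echo "scale=1; 2848 * 111250 / 1048576" | bc   # ~302.2 MiB/s (thread 0,1)
echo "scale=1; 5664 * 111250 / 1048576" | bc   # ~600.9 -> Total row, 600 MiB/s
```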
00:10:55.647   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:55.647   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:55.647    23:44:26	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:10:55.647    23:44:26	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:10:55.647     23:44:26	-- accel/accel.sh@12 -- # build_accel_config
00:10:55.647     23:44:26	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:55.647     23:44:26	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:55.647     23:44:26	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:55.647     23:44:26	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:55.647     23:44:26	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:55.647     23:44:26	-- accel/accel.sh@41 -- # local IFS=,
00:10:55.647     23:44:26	-- accel/accel.sh@42 -- # jq -r .
00:10:55.647  [2024-12-13 23:44:26.350847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:55.647  [2024-12-13 23:44:26.351143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108042 ]
00:10:55.906  [2024-12-13 23:44:26.525492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:56.164  [2024-12-13 23:44:26.720891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val=
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val=
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val=
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val=0x1
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val=
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val=
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val=decompress
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@24 -- # accel_opc=decompress
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val='111250 bytes'
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val=
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val=software
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@23 -- # accel_module=software
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val=32
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val=32
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val=2
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val=Yes
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val=
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:56.423   23:44:26	-- accel/accel.sh@21 -- # val=
00:10:56.423   23:44:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # IFS=:
00:10:56.423   23:44:26	-- accel/accel.sh@20 -- # read -r var val
00:10:58.326   23:44:28	-- accel/accel.sh@21 -- # val=
00:10:58.326   23:44:28	-- accel/accel.sh@22 -- # case "$var" in
00:10:58.326   23:44:28	-- accel/accel.sh@20 -- # IFS=:
00:10:58.326   23:44:28	-- accel/accel.sh@20 -- # read -r var val
00:10:58.326   23:44:28	-- accel/accel.sh@21 -- # val=
00:10:58.326   23:44:28	-- accel/accel.sh@22 -- # case "$var" in
00:10:58.326   23:44:28	-- accel/accel.sh@20 -- # IFS=:
00:10:58.326   23:44:28	-- accel/accel.sh@20 -- # read -r var val
00:10:58.326   23:44:28	-- accel/accel.sh@21 -- # val=
00:10:58.326   23:44:28	-- accel/accel.sh@22 -- # case "$var" in
00:10:58.326   23:44:28	-- accel/accel.sh@20 -- # IFS=:
00:10:58.326   23:44:28	-- accel/accel.sh@20 -- # read -r var val
00:10:58.326   23:44:28	-- accel/accel.sh@21 -- # val=
00:10:58.326   23:44:28	-- accel/accel.sh@22 -- # case "$var" in
00:10:58.327   23:44:28	-- accel/accel.sh@20 -- # IFS=:
00:10:58.327   23:44:28	-- accel/accel.sh@20 -- # read -r var val
00:10:58.327   23:44:28	-- accel/accel.sh@21 -- # val=
00:10:58.327   23:44:28	-- accel/accel.sh@22 -- # case "$var" in
00:10:58.327   23:44:28	-- accel/accel.sh@20 -- # IFS=:
00:10:58.327   23:44:28	-- accel/accel.sh@20 -- # read -r var val
00:10:58.327   23:44:28	-- accel/accel.sh@21 -- # val=
00:10:58.327   23:44:28	-- accel/accel.sh@22 -- # case "$var" in
00:10:58.327   23:44:28	-- accel/accel.sh@20 -- # IFS=:
00:10:58.327   23:44:28	-- accel/accel.sh@20 -- # read -r var val
00:10:58.327   23:44:28	-- accel/accel.sh@21 -- # val=
00:10:58.327   23:44:28	-- accel/accel.sh@22 -- # case "$var" in
00:10:58.327   23:44:28	-- accel/accel.sh@20 -- # IFS=:
00:10:58.327   23:44:28	-- accel/accel.sh@20 -- # read -r var val
00:10:58.327   23:44:28	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:58.327   23:44:28	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:10:58.327   23:44:28	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:58.327  
00:10:58.327  real	0m4.870s
00:10:58.327  user	0m4.277s
00:10:58.327  sys	0m0.438s
00:10:58.327   23:44:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:58.327   23:44:28	-- common/autotest_common.sh@10 -- # set +x
00:10:58.327  ************************************
00:10:58.327  END TEST accel_decomp_full_mthread
00:10:58.327  ************************************
00:10:58.327   23:44:28	-- accel/accel.sh@116 -- # [[ n == y ]]
00:10:58.327   23:44:28	-- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62
00:10:58.327    23:44:28	-- accel/accel.sh@129 -- # build_accel_config
00:10:58.327   23:44:28	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:10:58.327   23:44:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:58.327    23:44:28	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:58.327   23:44:28	-- common/autotest_common.sh@10 -- # set +x
00:10:58.327    23:44:28	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:58.327    23:44:28	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:58.327    23:44:28	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:58.327    23:44:28	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:58.327    23:44:28	-- accel/accel.sh@41 -- # local IFS=,
00:10:58.327    23:44:28	-- accel/accel.sh@42 -- # jq -r .
00:10:58.327  ************************************
00:10:58.327  START TEST accel_dif_functional_tests
00:10:58.327  ************************************
00:10:58.327   23:44:28	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62
00:10:58.327  [2024-12-13 23:44:28.894389] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:58.327  [2024-12-13 23:44:28.895422] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108096 ]
00:10:58.585  [2024-12-13 23:44:29.074271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:58.585  [2024-12-13 23:44:29.259034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:58.585  [2024-12-13 23:44:29.259184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:58.585  [2024-12-13 23:44:29.259181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:10:58.844  
00:10:58.844  
00:10:58.844       CUnit - A unit testing framework for C - Version 2.1-3
00:10:58.844       http://cunit.sourceforge.net/
00:10:58.844  
00:10:58.844  
00:10:58.844  Suite: accel_dif
00:10:58.844    Test: verify: DIF generated, GUARD check ...passed
00:10:58.844    Test: verify: DIF generated, APPTAG check ...passed
00:10:58.844    Test: verify: DIF generated, REFTAG check ...passed
00:10:58.844    Test: verify: DIF not generated, GUARD check ...[2024-12-13 23:44:29.569344] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10,  Expected=5a5a, Actual=7867
00:10:58.844  [2024-12-13 23:44:29.569565] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10,  Expected=5a5a, Actual=7867
00:10:58.844  passed
00:10:58.844    Test: verify: DIF not generated, APPTAG check ...[2024-12-13 23:44:29.569763] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10,  Expected=14, Actual=5a5a
00:10:58.844  [2024-12-13 23:44:29.569939] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10,  Expected=14, Actual=5a5a
00:10:58.844  passed
00:10:58.844    Test: verify: DIF not generated, REFTAG check ...[2024-12-13 23:44:29.570079] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:10:58.844  [2024-12-13 23:44:29.570246] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:10:58.844  passed
00:10:58.844    Test: verify: APPTAG correct, APPTAG check ...passed
00:10:58.844    Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-13 23:44:29.570501] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30,  Expected=28, Actual=14
00:10:58.844  passed
00:10:58.844    Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:10:58.844    Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:10:58.844    Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:10:58.844    Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-13 23:44:29.571052] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:10:58.844  passed
00:10:58.844    Test: generate copy: DIF generated, GUARD check ...passed
00:10:58.844    Test: generate copy: DIF generated, APPTAG check ...passed
00:10:58.844    Test: generate copy: DIF generated, REFTAG check ...passed
00:10:58.844    Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:10:58.844    Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:10:58.844    Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:10:58.844    Test: generate copy: iovecs-len validate ...[2024-12-13 23:44:29.572023] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:10:58.844  passed
00:10:58.844    Test: generate copy: buffer alignment validate ...passed
00:10:58.844  
00:10:58.844  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:10:58.844                suites      1      1    n/a      0        0
00:10:58.844                 tests     20     20     20      0        0
00:10:58.844               asserts    204    204    204      0      n/a
00:10:58.844  
00:10:58.845  Elapsed time =    0.007 seconds
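All 20 DIF cases pass even though *ERROR* lines appear above: the "not generated" and "incorrect" cases deliberately feed mismatched Guard, App Tag, and Ref Tag protection fields and assert that `_dif_verify` rejects them, so those error logs are the expected negative-path output. A hedged way to re-run just this suite standalone (an empty subsystem config is assumed to be sufficient):

```bash
/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c <(echo '{"subsystems": []}')
```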
00:11:00.220  
00:11:00.220  real	0m1.788s
00:11:00.220  user	0m3.411s
00:11:00.220  sys	0m0.271s
00:11:00.220   23:44:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:00.220  ************************************
00:11:00.220  END TEST accel_dif_functional_tests
00:11:00.220  ************************************
00:11:00.220   23:44:30	-- common/autotest_common.sh@10 -- # set +x
00:11:00.220  
00:11:00.220  real	1m46.724s
00:11:00.220  user	1m55.655s
00:11:00.220  sys	0m10.685s
00:11:00.220   23:44:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:00.220   23:44:30	-- common/autotest_common.sh@10 -- # set +x
00:11:00.220  ************************************
00:11:00.220  END TEST accel
00:11:00.220  ************************************
00:11:00.220   23:44:30	-- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh
00:11:00.220   23:44:30	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:00.220   23:44:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:00.220   23:44:30	-- common/autotest_common.sh@10 -- # set +x
00:11:00.220  ************************************
00:11:00.220  START TEST accel_rpc
00:11:00.220  ************************************
00:11:00.220   23:44:30	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh
00:11:00.220  * Looking for test storage...
00:11:00.220  * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel
00:11:00.220    23:44:30	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:11:00.220     23:44:30	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:11:00.220     23:44:30	-- common/autotest_common.sh@1690 -- # lcov --version
00:11:00.220    23:44:30	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:11:00.220    23:44:30	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:11:00.220    23:44:30	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:11:00.220    23:44:30	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:11:00.220    23:44:30	-- scripts/common.sh@335 -- # IFS=.-:
00:11:00.220    23:44:30	-- scripts/common.sh@335 -- # read -ra ver1
00:11:00.220    23:44:30	-- scripts/common.sh@336 -- # IFS=.-:
00:11:00.220    23:44:30	-- scripts/common.sh@336 -- # read -ra ver2
00:11:00.220    23:44:30	-- scripts/common.sh@337 -- # local 'op=<'
00:11:00.220    23:44:30	-- scripts/common.sh@339 -- # ver1_l=2
00:11:00.220    23:44:30	-- scripts/common.sh@340 -- # ver2_l=1
00:11:00.220    23:44:30	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:11:00.220    23:44:30	-- scripts/common.sh@343 -- # case "$op" in
00:11:00.220    23:44:30	-- scripts/common.sh@344 -- # : 1
00:11:00.220    23:44:30	-- scripts/common.sh@363 -- # (( v = 0 ))
00:11:00.220    23:44:30	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:00.220     23:44:30	-- scripts/common.sh@364 -- # decimal 1
00:11:00.220     23:44:30	-- scripts/common.sh@352 -- # local d=1
00:11:00.220     23:44:30	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:00.220     23:44:30	-- scripts/common.sh@354 -- # echo 1
00:11:00.220    23:44:30	-- scripts/common.sh@364 -- # ver1[v]=1
00:11:00.220     23:44:30	-- scripts/common.sh@365 -- # decimal 2
00:11:00.220     23:44:30	-- scripts/common.sh@352 -- # local d=2
00:11:00.220     23:44:30	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:00.220     23:44:30	-- scripts/common.sh@354 -- # echo 2
00:11:00.220    23:44:30	-- scripts/common.sh@365 -- # ver2[v]=2
00:11:00.220    23:44:30	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:11:00.220    23:44:30	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:11:00.220    23:44:30	-- scripts/common.sh@367 -- # return 0
00:11:00.220    23:44:30	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:00.220    23:44:30	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:11:00.220  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:00.220  		--rc genhtml_branch_coverage=1
00:11:00.220  		--rc genhtml_function_coverage=1
00:11:00.220  		--rc genhtml_legend=1
00:11:00.220  		--rc geninfo_all_blocks=1
00:11:00.220  		--rc geninfo_unexecuted_blocks=1
00:11:00.220  		
00:11:00.220  		'
00:11:00.220    23:44:30	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:11:00.220  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:00.220  		--rc genhtml_branch_coverage=1
00:11:00.220  		--rc genhtml_function_coverage=1
00:11:00.220  		--rc genhtml_legend=1
00:11:00.220  		--rc geninfo_all_blocks=1
00:11:00.220  		--rc geninfo_unexecuted_blocks=1
00:11:00.220  		
00:11:00.220  		'
00:11:00.220    23:44:30	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:11:00.220  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:00.220  		--rc genhtml_branch_coverage=1
00:11:00.220  		--rc genhtml_function_coverage=1
00:11:00.221  		--rc genhtml_legend=1
00:11:00.221  		--rc geninfo_all_blocks=1
00:11:00.221  		--rc geninfo_unexecuted_blocks=1
00:11:00.221  		
00:11:00.221  		'
00:11:00.221    23:44:30	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:11:00.221  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:00.221  		--rc genhtml_branch_coverage=1
00:11:00.221  		--rc genhtml_function_coverage=1
00:11:00.221  		--rc genhtml_legend=1
00:11:00.221  		--rc geninfo_all_blocks=1
00:11:00.221  		--rc geninfo_unexecuted_blocks=1
00:11:00.221  		
00:11:00.221  		'
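The `lt 1.15 2` / `cmp_versions` trace above decides whether the installed lcov predates 2.x, which selects the extra branch/function coverage flags exported into LCOV_OPTS. A self-contained sketch of the comparison (assumed to mirror scripts/common.sh; numeric fields only):

```bash
ver_lt() {
    local IFS=.-:            # split both versions on '.', '-' and ':'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                 # equal is not less-than
}
ver_lt 1.15 2 && echo "lcov < 2: enable the lcov branch/function coverage flags"
```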
00:11:00.221   23:44:30	-- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:11:00.221   23:44:30	-- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=108194
00:11:00.221   23:44:30	-- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
00:11:00.221   23:44:30	-- accel/accel_rpc.sh@15 -- # waitforlisten 108194
00:11:00.221   23:44:30	-- common/autotest_common.sh@829 -- # '[' -z 108194 ']'
00:11:00.221   23:44:30	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:00.221   23:44:30	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:00.221   23:44:30	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:00.221  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:00.221   23:44:30	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:00.221   23:44:30	-- common/autotest_common.sh@10 -- # set +x
00:11:00.221  [2024-12-13 23:44:30.927070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:00.221  [2024-12-13 23:44:30.927385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108194 ]
00:11:00.479  [2024-12-13 23:44:31.102165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:00.738  [2024-12-13 23:44:31.285743] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:11:00.738  [2024-12-13 23:44:31.286029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:01.305   23:44:31	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:01.305   23:44:31	-- common/autotest_common.sh@862 -- # return 0
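`waitforlisten 108194` returned above once the freshly started spdk_tgt answered on /var/tmp/spdk.sock; the `(( i == 0 ))` / `return 0` trace is the tail of its retry loop. A hedged sketch of the helper (the real autotest_common.sh version differs in detail):

```bash
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i=100
    while (( i-- > 0 )); do
        # probe a harmless RPC; success means the target is up and listening
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 \
            rpc_get_methods &> /dev/null && return 0
        kill -0 "$pid" 2> /dev/null || return 1   # target died while waiting
        sleep 0.1
    done
    return 1
}
```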
00:11:01.305   23:44:31	-- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:11:01.305   23:44:31	-- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:11:01.305   23:44:31	-- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:11:01.305   23:44:31	-- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:11:01.305   23:44:31	-- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:11:01.305   23:44:31	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:01.305   23:44:31	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:01.305   23:44:31	-- common/autotest_common.sh@10 -- # set +x
00:11:01.305  ************************************
00:11:01.305  START TEST accel_assign_opcode
00:11:01.305  ************************************
00:11:01.305   23:44:31	-- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite
00:11:01.305   23:44:31	-- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:11:01.305   23:44:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:01.305   23:44:31	-- common/autotest_common.sh@10 -- # set +x
00:11:01.305  [2024-12-13 23:44:31.879162] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:11:01.305   23:44:31	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:01.305   23:44:31	-- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:11:01.305   23:44:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:01.305   23:44:31	-- common/autotest_common.sh@10 -- # set +x
00:11:01.305  [2024-12-13 23:44:31.887151] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:11:01.305   23:44:31	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:01.305   23:44:31	-- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:11:01.305   23:44:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:01.305   23:44:31	-- common/autotest_common.sh@10 -- # set +x
00:11:02.240   23:44:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.240   23:44:32	-- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:11:02.240   23:44:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.240   23:44:32	-- accel/accel_rpc.sh@42 -- # grep software
00:11:02.240   23:44:32	-- common/autotest_common.sh@10 -- # set +x
00:11:02.240   23:44:32	-- accel/accel_rpc.sh@42 -- # jq -r .copy
00:11:02.240   23:44:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.240  software
00:11:02.240  
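Replayed as direct RPC calls, the suite above does the following (paths as used elsewhere in this log):

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc accel_assign_opc -o copy -m incorrect    # accepted pre-init, overridden next
$rpc accel_assign_opc -o copy -m software     # last assignment wins
$rpc framework_start_init                     # leave the --wait-for-rpc pause
$rpc accel_get_opc_assignments | jq -r .copy  # prints 'software', as above
```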
00:11:02.240  real	0m0.777s
00:11:02.240  user	0m0.037s
00:11:02.240  sys	0m0.016s
00:11:02.240   23:44:32	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:02.240   23:44:32	-- common/autotest_common.sh@10 -- # set +x
00:11:02.240  ************************************
00:11:02.240  END TEST accel_assign_opcode
00:11:02.240  ************************************
00:11:02.240   23:44:32	-- accel/accel_rpc.sh@55 -- # killprocess 108194
00:11:02.240   23:44:32	-- common/autotest_common.sh@936 -- # '[' -z 108194 ']'
00:11:02.240   23:44:32	-- common/autotest_common.sh@940 -- # kill -0 108194
00:11:02.240    23:44:32	-- common/autotest_common.sh@941 -- # uname
00:11:02.240   23:44:32	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:02.240    23:44:32	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108194
00:11:02.240   23:44:32	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:02.240  killing process with pid 108194
00:11:02.240   23:44:32	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:02.240   23:44:32	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 108194'
00:11:02.240   23:44:32	-- common/autotest_common.sh@955 -- # kill 108194
00:11:02.240   23:44:32	-- common/autotest_common.sh@960 -- # wait 108194
00:11:04.143  
00:11:04.143  real	0m3.956s
00:11:04.143  user	0m3.803s
00:11:04.143  sys	0m0.672s
00:11:04.143   23:44:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:04.143  ************************************
00:11:04.143  END TEST accel_rpc
00:11:04.143  ************************************
00:11:04.143   23:44:34	-- common/autotest_common.sh@10 -- # set +x
00:11:04.143   23:44:34	-- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:11:04.143   23:44:34	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:04.143   23:44:34	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:04.143   23:44:34	-- common/autotest_common.sh@10 -- # set +x
00:11:04.143  ************************************
00:11:04.143  START TEST app_cmdline
00:11:04.143  ************************************
00:11:04.143   23:44:34	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:11:04.143  * Looking for test storage...
00:11:04.143  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:11:04.143    23:44:34	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:11:04.143     23:44:34	-- common/autotest_common.sh@1690 -- # lcov --version
00:11:04.143     23:44:34	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:11:04.143    23:44:34	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:11:04.143    23:44:34	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:11:04.143    23:44:34	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:11:04.143    23:44:34	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:11:04.143    23:44:34	-- scripts/common.sh@335 -- # IFS=.-:
00:11:04.143    23:44:34	-- scripts/common.sh@335 -- # read -ra ver1
00:11:04.144    23:44:34	-- scripts/common.sh@336 -- # IFS=.-:
00:11:04.144    23:44:34	-- scripts/common.sh@336 -- # read -ra ver2
00:11:04.144    23:44:34	-- scripts/common.sh@337 -- # local 'op=<'
00:11:04.144    23:44:34	-- scripts/common.sh@339 -- # ver1_l=2
00:11:04.144    23:44:34	-- scripts/common.sh@340 -- # ver2_l=1
00:11:04.144    23:44:34	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:11:04.144    23:44:34	-- scripts/common.sh@343 -- # case "$op" in
00:11:04.144    23:44:34	-- scripts/common.sh@344 -- # : 1
00:11:04.144    23:44:34	-- scripts/common.sh@363 -- # (( v = 0 ))
00:11:04.144    23:44:34	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:04.144     23:44:34	-- scripts/common.sh@364 -- # decimal 1
00:11:04.144     23:44:34	-- scripts/common.sh@352 -- # local d=1
00:11:04.144     23:44:34	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:04.144     23:44:34	-- scripts/common.sh@354 -- # echo 1
00:11:04.144    23:44:34	-- scripts/common.sh@364 -- # ver1[v]=1
00:11:04.144     23:44:34	-- scripts/common.sh@365 -- # decimal 2
00:11:04.402     23:44:34	-- scripts/common.sh@352 -- # local d=2
00:11:04.402     23:44:34	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:04.402     23:44:34	-- scripts/common.sh@354 -- # echo 2
00:11:04.402    23:44:34	-- scripts/common.sh@365 -- # ver2[v]=2
00:11:04.402    23:44:34	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:11:04.402    23:44:34	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:11:04.402    23:44:34	-- scripts/common.sh@367 -- # return 0
00:11:04.402    23:44:34	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:04.402    23:44:34	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:11:04.402  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:04.402  		--rc genhtml_branch_coverage=1
00:11:04.402  		--rc genhtml_function_coverage=1
00:11:04.402  		--rc genhtml_legend=1
00:11:04.402  		--rc geninfo_all_blocks=1
00:11:04.402  		--rc geninfo_unexecuted_blocks=1
00:11:04.402  		
00:11:04.402  		'
00:11:04.402    23:44:34	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:11:04.402  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:04.402  		--rc genhtml_branch_coverage=1
00:11:04.402  		--rc genhtml_function_coverage=1
00:11:04.402  		--rc genhtml_legend=1
00:11:04.402  		--rc geninfo_all_blocks=1
00:11:04.402  		--rc geninfo_unexecuted_blocks=1
00:11:04.402  		
00:11:04.402  		'
00:11:04.402    23:44:34	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:11:04.402  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:04.402  		--rc genhtml_branch_coverage=1
00:11:04.402  		--rc genhtml_function_coverage=1
00:11:04.402  		--rc genhtml_legend=1
00:11:04.402  		--rc geninfo_all_blocks=1
00:11:04.402  		--rc geninfo_unexecuted_blocks=1
00:11:04.402  		
00:11:04.402  		'
00:11:04.402    23:44:34	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:11:04.402  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:04.402  		--rc genhtml_branch_coverage=1
00:11:04.402  		--rc genhtml_function_coverage=1
00:11:04.402  		--rc genhtml_legend=1
00:11:04.402  		--rc geninfo_all_blocks=1
00:11:04.402  		--rc geninfo_unexecuted_blocks=1
00:11:04.402  		
00:11:04.402  		'
00:11:04.402   23:44:34	-- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:11:04.402   23:44:34	-- app/cmdline.sh@17 -- # spdk_tgt_pid=108331
00:11:04.403   23:44:34	-- app/cmdline.sh@18 -- # waitforlisten 108331
00:11:04.403   23:44:34	-- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:11:04.403   23:44:34	-- common/autotest_common.sh@829 -- # '[' -z 108331 ']'
00:11:04.403   23:44:34	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:04.403   23:44:34	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:04.403   23:44:34	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:04.403  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:04.403   23:44:34	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:04.403   23:44:34	-- common/autotest_common.sh@10 -- # set +x
00:11:04.403  [2024-12-13 23:44:34.961041] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:04.403  [2024-12-13 23:44:34.961263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108331 ]
00:11:04.403  [2024-12-13 23:44:35.128646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:04.661  [2024-12-13 23:44:35.310322] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:11:04.661  [2024-12-13 23:44:35.310581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:06.037   23:44:36	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:06.037   23:44:36	-- common/autotest_common.sh@862 -- # return 0
00:11:06.037   23:44:36	-- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:11:06.310  {
00:11:06.310    "version": "SPDK v24.01.1-pre git sha1 c13c99a5e",
00:11:06.310    "fields": {
00:11:06.310      "major": 24,
00:11:06.310      "minor": 1,
00:11:06.310      "patch": 1,
00:11:06.310      "suffix": "-pre",
00:11:06.310      "commit": "c13c99a5e"
00:11:06.310    }
00:11:06.310  }
00:11:06.310   23:44:36	-- app/cmdline.sh@22 -- # expected_methods=()
00:11:06.310   23:44:36	-- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:11:06.310   23:44:36	-- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:11:06.310   23:44:36	-- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:11:06.310    23:44:36	-- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:11:06.310    23:44:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:06.310    23:44:36	-- common/autotest_common.sh@10 -- # set +x
00:11:06.310    23:44:36	-- app/cmdline.sh@26 -- # sort
00:11:06.310    23:44:36	-- app/cmdline.sh@26 -- # jq -r '.[]'
00:11:06.310    23:44:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:06.310   23:44:36	-- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:11:06.310   23:44:36	-- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:11:06.310   23:44:36	-- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:11:06.310   23:44:36	-- common/autotest_common.sh@650 -- # local es=0
00:11:06.310   23:44:36	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:11:06.310   23:44:36	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:06.310   23:44:36	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:06.310    23:44:36	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:06.310   23:44:36	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:06.310    23:44:36	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:06.310   23:44:36	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:06.310   23:44:36	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:06.310   23:44:36	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:11:06.310   23:44:36	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:11:06.599  request:
00:11:06.599  {
00:11:06.599    "method": "env_dpdk_get_mem_stats",
00:11:06.599    "req_id": 1
00:11:06.599  }
00:11:06.599  Got JSON-RPC error response
00:11:06.599  response:
00:11:06.599  {
00:11:06.599    "code": -32601,
00:11:06.599    "message": "Method not found"
00:11:06.599  }
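The -32601 is by design: this spdk_tgt was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so every method outside that whitelist, including env_dpdk_get_mem_stats, is reported as not found. For example:

```bash
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats \
    || echo "rejected as expected: method is outside --rpcs-allowed"
```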
00:11:06.599   23:44:37	-- common/autotest_common.sh@653 -- # es=1
00:11:06.599   23:44:37	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:06.599   23:44:37	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:06.599   23:44:37	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:06.599   23:44:37	-- app/cmdline.sh@1 -- # killprocess 108331
00:11:06.599   23:44:37	-- common/autotest_common.sh@936 -- # '[' -z 108331 ']'
00:11:06.599   23:44:37	-- common/autotest_common.sh@940 -- # kill -0 108331
00:11:06.599    23:44:37	-- common/autotest_common.sh@941 -- # uname
00:11:06.599   23:44:37	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:06.599    23:44:37	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108331
00:11:06.599   23:44:37	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:06.599   23:44:37	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:06.599  killing process with pid 108331
00:11:06.599   23:44:37	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 108331'
00:11:06.599   23:44:37	-- common/autotest_common.sh@955 -- # kill 108331
00:11:06.599   23:44:37	-- common/autotest_common.sh@960 -- # wait 108331
00:11:08.502  ************************************
00:11:08.502  END TEST app_cmdline
00:11:08.502  ************************************
00:11:08.502  
00:11:08.502  real	0m4.341s
00:11:08.502  user	0m4.726s
00:11:08.502  sys	0m0.721s
00:11:08.502   23:44:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:08.502   23:44:39	-- common/autotest_common.sh@10 -- # set +x
00:11:08.502   23:44:39	-- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:11:08.502   23:44:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:08.502   23:44:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:08.502   23:44:39	-- common/autotest_common.sh@10 -- # set +x
00:11:08.502  ************************************
00:11:08.502  START TEST version
00:11:08.502  ************************************
00:11:08.502   23:44:39	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:11:08.502  * Looking for test storage...
00:11:08.502  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:11:08.502    23:44:39	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:11:08.502     23:44:39	-- common/autotest_common.sh@1690 -- # lcov --version
00:11:08.502     23:44:39	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:11:08.761    23:44:39	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:11:08.761    23:44:39	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:11:08.761    23:44:39	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:11:08.761    23:44:39	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:11:08.761    23:44:39	-- scripts/common.sh@335 -- # IFS=.-:
00:11:08.761    23:44:39	-- scripts/common.sh@335 -- # read -ra ver1
00:11:08.761    23:44:39	-- scripts/common.sh@336 -- # IFS=.-:
00:11:08.761    23:44:39	-- scripts/common.sh@336 -- # read -ra ver2
00:11:08.761    23:44:39	-- scripts/common.sh@337 -- # local 'op=<'
00:11:08.761    23:44:39	-- scripts/common.sh@339 -- # ver1_l=2
00:11:08.761    23:44:39	-- scripts/common.sh@340 -- # ver2_l=1
00:11:08.761    23:44:39	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:11:08.761    23:44:39	-- scripts/common.sh@343 -- # case "$op" in
00:11:08.761    23:44:39	-- scripts/common.sh@344 -- # : 1
00:11:08.761    23:44:39	-- scripts/common.sh@363 -- # (( v = 0 ))
00:11:08.761    23:44:39	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:08.761     23:44:39	-- scripts/common.sh@364 -- # decimal 1
00:11:08.761     23:44:39	-- scripts/common.sh@352 -- # local d=1
00:11:08.761     23:44:39	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:08.761     23:44:39	-- scripts/common.sh@354 -- # echo 1
00:11:08.761    23:44:39	-- scripts/common.sh@364 -- # ver1[v]=1
00:11:08.761     23:44:39	-- scripts/common.sh@365 -- # decimal 2
00:11:08.761     23:44:39	-- scripts/common.sh@352 -- # local d=2
00:11:08.761     23:44:39	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:08.761     23:44:39	-- scripts/common.sh@354 -- # echo 2
00:11:08.761    23:44:39	-- scripts/common.sh@365 -- # ver2[v]=2
00:11:08.761    23:44:39	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:11:08.761    23:44:39	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:11:08.761    23:44:39	-- scripts/common.sh@367 -- # return 0
00:11:08.761    23:44:39	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:08.761    23:44:39	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:11:08.761  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:08.761  		--rc genhtml_branch_coverage=1
00:11:08.761  		--rc genhtml_function_coverage=1
00:11:08.761  		--rc genhtml_legend=1
00:11:08.761  		--rc geninfo_all_blocks=1
00:11:08.761  		--rc geninfo_unexecuted_blocks=1
00:11:08.761  		
00:11:08.761  		'
00:11:08.761    23:44:39	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:11:08.761  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:08.761  		--rc genhtml_branch_coverage=1
00:11:08.761  		--rc genhtml_function_coverage=1
00:11:08.761  		--rc genhtml_legend=1
00:11:08.761  		--rc geninfo_all_blocks=1
00:11:08.761  		--rc geninfo_unexecuted_blocks=1
00:11:08.761  		
00:11:08.761  		'
00:11:08.761    23:44:39	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:11:08.761  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:08.761  		--rc genhtml_branch_coverage=1
00:11:08.761  		--rc genhtml_function_coverage=1
00:11:08.761  		--rc genhtml_legend=1
00:11:08.761  		--rc geninfo_all_blocks=1
00:11:08.761  		--rc geninfo_unexecuted_blocks=1
00:11:08.761  		
00:11:08.761  		'
00:11:08.761    23:44:39	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:11:08.761  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:08.761  		--rc genhtml_branch_coverage=1
00:11:08.761  		--rc genhtml_function_coverage=1
00:11:08.761  		--rc genhtml_legend=1
00:11:08.761  		--rc geninfo_all_blocks=1
00:11:08.761  		--rc geninfo_unexecuted_blocks=1
00:11:08.761  		
00:11:08.761  		'
00:11:08.761    23:44:39	-- app/version.sh@17 -- # get_header_version major
00:11:08.761    23:44:39	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:11:08.761    23:44:39	-- app/version.sh@14 -- # cut -f2
00:11:08.761    23:44:39	-- app/version.sh@14 -- # tr -d '"'
00:11:08.761   23:44:39	-- app/version.sh@17 -- # major=24
00:11:08.761    23:44:39	-- app/version.sh@18 -- # get_header_version minor
00:11:08.761    23:44:39	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:11:08.761    23:44:39	-- app/version.sh@14 -- # cut -f2
00:11:08.761    23:44:39	-- app/version.sh@14 -- # tr -d '"'
00:11:08.761   23:44:39	-- app/version.sh@18 -- # minor=1
00:11:08.761    23:44:39	-- app/version.sh@19 -- # get_header_version patch
00:11:08.761    23:44:39	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:11:08.761    23:44:39	-- app/version.sh@14 -- # cut -f2
00:11:08.761    23:44:39	-- app/version.sh@14 -- # tr -d '"'
00:11:08.761   23:44:39	-- app/version.sh@19 -- # patch=1
00:11:08.761    23:44:39	-- app/version.sh@20 -- # get_header_version suffix
00:11:08.761    23:44:39	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:11:08.761    23:44:39	-- app/version.sh@14 -- # cut -f2
00:11:08.761    23:44:39	-- app/version.sh@14 -- # tr -d '"'
00:11:08.761   23:44:39	-- app/version.sh@20 -- # suffix=-pre
00:11:08.761   23:44:39	-- app/version.sh@22 -- # version=24.1
00:11:08.761   23:44:39	-- app/version.sh@25 -- # (( patch != 0 ))
00:11:08.761   23:44:39	-- app/version.sh@25 -- # version=24.1.1
00:11:08.761   23:44:39	-- app/version.sh@28 -- # version=24.1.1rc0
00:11:08.761   23:44:39	-- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:11:08.761    23:44:39	-- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:11:08.761   23:44:39	-- app/version.sh@30 -- # py_version=24.1.1rc0
00:11:08.761   23:44:39	-- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]]
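The get_header_version calls above assemble the version string step by step: each SPDK_VERSION_* macro is grepped out of version.h, field 2 is kept, and the quotes are stripped; the `-pre` suffix is then mapped to `rc0` so the result matches the Python package's version string. A hedged sketch (assumes tab-separated macros, as the `cut -f2` in the trace implies):

```bash
h=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
hv() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$h" | cut -f2 | tr -d '"'; }
version="$(hv MAJOR).$(hv MINOR)"                  # 24.1
(( $(hv PATCH) != 0 )) && version+=".$(hv PATCH)"  # 24.1.1
[[ $(hv SUFFIX) == -pre ]] && version+=rc0         # 24.1.1rc0
[[ $version == "$(python3 -c 'import spdk; print(spdk.__version__)')" ]]
```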
00:11:08.761  
00:11:08.761  real	0m0.218s
00:11:08.761  user	0m0.185s
00:11:08.761  sys	0m0.075s
00:11:08.761   23:44:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:08.761   23:44:39	-- common/autotest_common.sh@10 -- # set +x
00:11:08.761  ************************************
00:11:08.761  END TEST version
00:11:08.761  ************************************
00:11:08.761   23:44:39	-- spdk/autotest.sh@181 -- # '[' 1 -eq 1 ']'
00:11:08.761   23:44:39	-- spdk/autotest.sh@182 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh
00:11:08.761   23:44:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:08.761   23:44:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:08.761   23:44:39	-- common/autotest_common.sh@10 -- # set +x
00:11:08.761  ************************************
00:11:08.761  START TEST blockdev_general
00:11:08.761  ************************************
00:11:08.761   23:44:39	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh
00:11:08.761  * Looking for test storage...
00:11:08.761  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:11:08.761    23:44:39	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:11:08.761     23:44:39	-- common/autotest_common.sh@1690 -- # lcov --version
00:11:08.761     23:44:39	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:11:09.020    23:44:39	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:11:09.020    23:44:39	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:11:09.020    23:44:39	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:11:09.020    23:44:39	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:11:09.020    23:44:39	-- scripts/common.sh@335 -- # IFS=.-:
00:11:09.020    23:44:39	-- scripts/common.sh@335 -- # read -ra ver1
00:11:09.020    23:44:39	-- scripts/common.sh@336 -- # IFS=.-:
00:11:09.020    23:44:39	-- scripts/common.sh@336 -- # read -ra ver2
00:11:09.020    23:44:39	-- scripts/common.sh@337 -- # local 'op=<'
00:11:09.020    23:44:39	-- scripts/common.sh@339 -- # ver1_l=2
00:11:09.020    23:44:39	-- scripts/common.sh@340 -- # ver2_l=1
00:11:09.020    23:44:39	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:11:09.020    23:44:39	-- scripts/common.sh@343 -- # case "$op" in
00:11:09.020    23:44:39	-- scripts/common.sh@344 -- # : 1
00:11:09.020    23:44:39	-- scripts/common.sh@363 -- # (( v = 0 ))
00:11:09.020    23:44:39	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:09.021     23:44:39	-- scripts/common.sh@364 -- # decimal 1
00:11:09.021     23:44:39	-- scripts/common.sh@352 -- # local d=1
00:11:09.021     23:44:39	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:09.021     23:44:39	-- scripts/common.sh@354 -- # echo 1
00:11:09.021    23:44:39	-- scripts/common.sh@364 -- # ver1[v]=1
00:11:09.021     23:44:39	-- scripts/common.sh@365 -- # decimal 2
00:11:09.021     23:44:39	-- scripts/common.sh@352 -- # local d=2
00:11:09.021     23:44:39	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:09.021     23:44:39	-- scripts/common.sh@354 -- # echo 2
00:11:09.021    23:44:39	-- scripts/common.sh@365 -- # ver2[v]=2
00:11:09.021    23:44:39	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:11:09.021    23:44:39	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:11:09.021    23:44:39	-- scripts/common.sh@367 -- # return 0
00:11:09.021    23:44:39	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:09.021    23:44:39	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:11:09.021  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:09.021  		--rc genhtml_branch_coverage=1
00:11:09.021  		--rc genhtml_function_coverage=1
00:11:09.021  		--rc genhtml_legend=1
00:11:09.021  		--rc geninfo_all_blocks=1
00:11:09.021  		--rc geninfo_unexecuted_blocks=1
00:11:09.021  		
00:11:09.021  		'
00:11:09.021    23:44:39	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:11:09.021  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:09.021  		--rc genhtml_branch_coverage=1
00:11:09.021  		--rc genhtml_function_coverage=1
00:11:09.021  		--rc genhtml_legend=1
00:11:09.021  		--rc geninfo_all_blocks=1
00:11:09.021  		--rc geninfo_unexecuted_blocks=1
00:11:09.021  		
00:11:09.021  		'
00:11:09.021    23:44:39	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:11:09.021  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:09.021  		--rc genhtml_branch_coverage=1
00:11:09.021  		--rc genhtml_function_coverage=1
00:11:09.021  		--rc genhtml_legend=1
00:11:09.021  		--rc geninfo_all_blocks=1
00:11:09.021  		--rc geninfo_unexecuted_blocks=1
00:11:09.021  		
00:11:09.021  		'
00:11:09.021    23:44:39	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:11:09.021  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:09.021  		--rc genhtml_branch_coverage=1
00:11:09.021  		--rc genhtml_function_coverage=1
00:11:09.021  		--rc genhtml_legend=1
00:11:09.021  		--rc geninfo_all_blocks=1
00:11:09.021  		--rc geninfo_unexecuted_blocks=1
00:11:09.021  		
00:11:09.021  		'
00:11:09.021   23:44:39	-- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:11:09.021    23:44:39	-- bdev/nbd_common.sh@6 -- # set -e
00:11:09.021   23:44:39	-- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:11:09.021   23:44:39	-- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:11:09.021   23:44:39	-- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:11:09.021   23:44:39	-- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:11:09.021   23:44:39	-- bdev/blockdev.sh@18 -- # :
00:11:09.021   23:44:39	-- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0
00:11:09.021   23:44:39	-- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1
00:11:09.021   23:44:39	-- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5
00:11:09.021    23:44:39	-- bdev/blockdev.sh@672 -- # uname -s
00:11:09.021   23:44:39	-- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']'
00:11:09.021   23:44:39	-- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0
00:11:09.021   23:44:39	-- bdev/blockdev.sh@680 -- # test_type=bdev
00:11:09.021   23:44:39	-- bdev/blockdev.sh@681 -- # crypto_device=
00:11:09.021   23:44:39	-- bdev/blockdev.sh@682 -- # dek=
00:11:09.021   23:44:39	-- bdev/blockdev.sh@683 -- # env_ctx=
00:11:09.021   23:44:39	-- bdev/blockdev.sh@684 -- # wait_for_rpc=
00:11:09.021   23:44:39	-- bdev/blockdev.sh@685 -- # '[' -n '' ']'
00:11:09.021   23:44:39	-- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]]
00:11:09.021   23:44:39	-- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc
00:11:09.021   23:44:39	-- bdev/blockdev.sh@691 -- # start_spdk_tgt
00:11:09.021   23:44:39	-- bdev/blockdev.sh@45 -- # spdk_tgt_pid=108533
00:11:09.021   23:44:39	-- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc
00:11:09.021   23:44:39	-- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:11:09.021   23:44:39	-- bdev/blockdev.sh@47 -- # waitforlisten 108533
00:11:09.021   23:44:39	-- common/autotest_common.sh@829 -- # '[' -z 108533 ']'
00:11:09.021   23:44:39	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:09.021   23:44:39	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:09.021   23:44:39	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:09.021  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:09.021   23:44:39	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:09.021   23:44:39	-- common/autotest_common.sh@10 -- # set +x
00:11:09.021  [2024-12-13 23:44:39.665876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:09.021  [2024-12-13 23:44:39.666070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108533 ]
00:11:09.282  [2024-12-13 23:44:39.821934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:09.282  [2024-12-13 23:44:40.013534] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:11:09.282  [2024-12-13 23:44:40.013820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:10.218   23:44:40	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:10.218   23:44:40	-- common/autotest_common.sh@862 -- # return 0
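The (( i == 0 )) / return 0 pair above is waitforlisten's retry loop terminating: the helper blocks until the freshly launched spdk_tgt (pid 108533) accepts RPCs on /var/tmp/spdk.sock, and the EAL/reactor notices in between are the target's own startup log. A simplified sketch of the polling idiom (hypothetical shape; the real helper uses max_retries=100 and extra liveness checks):

    build/bin/spdk_tgt --wait-for-rpc &
    spdk_tgt_pid=$!
    for ((i = 100; i > 0; i--)); do
        # rpc_get_methods only succeeds once the app is listening on the socket
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done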
00:11:10.218   23:44:40	-- bdev/blockdev.sh@692 -- # case "$test_type" in
00:11:10.218   23:44:40	-- bdev/blockdev.sh@694 -- # setup_bdev_conf
00:11:10.218   23:44:40	-- bdev/blockdev.sh@51 -- # rpc_cmd
00:11:10.218   23:44:40	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.218   23:44:40	-- common/autotest_common.sh@10 -- # set +x
00:11:10.784  [2024-12-13 23:44:41.377195] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:10.784  [2024-12-13 23:44:41.377305] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:10.784  
00:11:10.784  [2024-12-13 23:44:41.385168] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:10.784  [2024-12-13 23:44:41.386549] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:10.784  
00:11:10.784  Malloc0
00:11:10.784  Malloc1
00:11:10.784  Malloc2
00:11:11.043  Malloc3
00:11:11.043  Malloc4
00:11:11.043  Malloc5
00:11:11.043  Malloc6
00:11:11.043  Malloc7
00:11:11.043  Malloc8
00:11:11.302  Malloc9
00:11:11.302  [2024-12-13 23:44:41.798579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:11.302  [2024-12-13 23:44:41.798828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:11.302  [2024-12-13 23:44:41.798915] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:11:11.302  [2024-12-13 23:44:41.799188] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:11.302  [2024-12-13 23:44:41.801658] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:11.302  [2024-12-13 23:44:41.801886] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:11.302  TestPT
00:11:11.302   23:44:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.302   23:44:41	-- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
00:11:11.302  5000+0 records in
00:11:11.302  5000+0 records out
00:11:11.302  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0292031 s, 351 MB/s
00:11:11.302   23:44:41	-- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048
00:11:11.302   23:44:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:11.302   23:44:41	-- common/autotest_common.sh@10 -- # set +x
00:11:11.302  AIO0
00:11:11.302   23:44:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
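The AIO bdev is backed by a plain file: dd writes 5000 x 2048 B = 10,240,000 bytes of zeroes, then bdev_aio_create registers that file as AIO0 with a 2048-byte logical block size. Standalone equivalent of the two traced commands (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py):

    dd if=/dev/zero of=test/bdev/aiofile bs=2048 count=5000
    scripts/rpc.py bdev_aio_create test/bdev/aiofile AIO0 2048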
00:11:11.302   23:44:41	-- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine
00:11:11.302   23:44:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:11.302   23:44:41	-- common/autotest_common.sh@10 -- # set +x
00:11:11.302   23:44:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.302   23:44:41	-- bdev/blockdev.sh@738 -- # cat
00:11:11.302    23:44:41	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel
00:11:11.302    23:44:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:11.302    23:44:41	-- common/autotest_common.sh@10 -- # set +x
00:11:11.302    23:44:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.302    23:44:41	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev
00:11:11.302    23:44:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:11.302    23:44:41	-- common/autotest_common.sh@10 -- # set +x
00:11:11.302    23:44:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.302    23:44:41	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf
00:11:11.302    23:44:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:11.302    23:44:41	-- common/autotest_common.sh@10 -- # set +x
00:11:11.302    23:44:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.302   23:44:42	-- bdev/blockdev.sh@746 -- # mapfile -t bdevs
00:11:11.302    23:44:42	-- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs
00:11:11.302    23:44:42	-- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)'
00:11:11.302    23:44:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:11.302    23:44:42	-- common/autotest_common.sh@10 -- # set +x
00:11:11.562    23:44:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.562   23:44:42	-- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name
00:11:11.562    23:44:42	-- bdev/blockdev.sh@747 -- # jq -r .name
00:11:11.563    23:44:42	-- bdev/blockdev.sh@747 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "2103d49a-c133-48df-9927-a26439e83cf7"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "2103d49a-c133-48df-9927-a26439e83cf7",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "43bdcf1a-4b28-527e-96aa-622f4390a1b6"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "43bdcf1a-4b28-527e-96aa-622f4390a1b6",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "7849d386-9c33-5b81-a599-2364dc3380e5"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "7849d386-9c33-5b81-a599-2364dc3380e5",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "8553e949-4664-5ea1-8ecb-e1159d0504e3"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "8553e949-4664-5ea1-8ecb-e1159d0504e3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    
"b99887f7-b516-5155-9793-7447850cdb38"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "b99887f7-b516-5155-9793-7447850cdb38",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "57594838-a9cc-57e1-a7e1-75bd643949ad"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "57594838-a9cc-57e1-a7e1-75bd643949ad",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "7cd84e69-a19b-5acb-a3b8-750ffc377d72"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "7cd84e69-a19b-5acb-a3b8-750ffc377d72",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "cb3ef9e4-6030-5b89-98da-b5df34f46bcb"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "cb3ef9e4-6030-5b89-98da-b5df34f46bcb",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "39ea08bb-d8d3-51b7-91dd-72dcd0e30b88"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "39ea08bb-d8d3-51b7-91dd-72dcd0e30b88",' '  
"assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": "Malloc2p6",' '  "aliases": [' '    "8e01874d-39b3-5bcc-8ac8-f47480eb02a3"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "8e01874d-39b3-5bcc-8ac8-f47480eb02a3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "96de4e4f-e3d2-59c5-b7a8-cd0b9f443ec5"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "96de4e4f-e3d2-59c5-b7a8-cd0b9f443ec5",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "70af9b17-3329-5a87-b871-ef5374ad727e"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "70af9b17-3329-5a87-b871-ef5374ad727e",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "078b9ae9-25bc-4c50-a6e9-aef372f163a4"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "078b9ae9-25bc-4c50-a6e9-aef372f163a4",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    
"rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "078b9ae9-25bc-4c50-a6e9-aef372f163a4",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "056cf3ec-d5f8-4d5e-8868-f89dd67f975b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "ef725040-b135-450c-8024-8ef9f3b90402",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "32837d76-b713-4479-b4a9-d266f7bf9ac3"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "32837d76-b713-4479-b4a9-d266f7bf9ac3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "32837d76-b713-4479-b4a9-d266f7bf9ac3",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "62dd1351-0b53-48f6-9f45-dc1ba97385c0",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "2ff40c4f-6ba5-441c-815f-a0963093863d",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    "5445c4ec-6114-4ead-88ba-65efabc64926"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "5445c4ec-6114-4ead-88ba-65efabc64926",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    
"w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "write_zeroes": true,' '    "flush": false,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "5445c4ec-6114-4ead-88ba-65efabc64926",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "ec23fdf9-72e1-470f-bff4-b636d83bbf36",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "7a789f28-1cef-4d89-9beb-03af6c45a26c",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "caac536e-e6a9-4d5e-b3f9-5637de4956ec"' '  ],' '  "product_name": "AIO disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "caac536e-e6a9-4d5e-b3f9-5637de4956ec",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false' '    }' '  }' '}'
00:11:11.563   23:44:42	-- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}")
00:11:11.563   23:44:42	-- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0
00:11:11.563   23:44:42	-- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT
00:11:11.563   23:44:42	-- bdev/blockdev.sh@752 -- # killprocess 108533
00:11:11.563   23:44:42	-- common/autotest_common.sh@936 -- # '[' -z 108533 ']'
00:11:11.563   23:44:42	-- common/autotest_common.sh@940 -- # kill -0 108533
00:11:11.563    23:44:42	-- common/autotest_common.sh@941 -- # uname
00:11:11.563   23:44:42	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:11.563    23:44:42	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108533
00:11:11.563   23:44:42	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:11.563   23:44:42	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:11.563   23:44:42	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 108533'
00:11:11.563  killing process with pid 108533
00:11:11.563   23:44:42	-- common/autotest_common.sh@955 -- # kill 108533
00:11:11.563   23:44:42	-- common/autotest_common.sh@960 -- # wait 108533
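killprocess first sanity-checks the pid (kill -0), confirms the process isn't sudo (the reactor_0 comm check above), then signals and reaps it so the target exits cleanly before the next stage. A minimal sketch of the core logic (assumed simplification of the autotest_common.sh helper):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1   # is it still running?
        kill "$pid"
        wait "$pid" || true          # reap; a non-zero exit here is expected
    }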
00:11:14.096   23:44:44	-- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT
00:11:14.096   23:44:44	-- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 ''
00:11:14.096   23:44:44	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:11:14.096   23:44:44	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:14.096   23:44:44	-- common/autotest_common.sh@10 -- # set +x
00:11:14.354  ************************************
00:11:14.354  START TEST bdev_hello_world
00:11:14.354  ************************************
00:11:14.354   23:44:44	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 ''
00:11:14.354  [2024-12-13 23:44:44.901410] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:14.355  [2024-12-13 23:44:44.901803] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108625 ]
00:11:14.355  [2024-12-13 23:44:45.057225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:14.616  [2024-12-13 23:44:45.245685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:14.874  [2024-12-13 23:44:45.603627] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:14.874  [2024-12-13 23:44:45.604051] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:15.132  [2024-12-13 23:44:45.611560] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:15.132  [2024-12-13 23:44:45.611761] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:15.132  [2024-12-13 23:44:45.619582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:15.132  [2024-12-13 23:44:45.619762] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:15.132  [2024-12-13 23:44:45.619896] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:15.132  [2024-12-13 23:44:45.808037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:15.132  [2024-12-13 23:44:45.808458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:15.132  [2024-12-13 23:44:45.808559] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:11:15.132  [2024-12-13 23:44:45.808785] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:15.132  [2024-12-13 23:44:45.811477] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:15.132  [2024-12-13 23:44:45.811666] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
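This block shows vbdev_passthru's two-phase registration: the first attempt (23:44:45.619) finds Malloc3 not yet registered and defers ("vbdev creation deferred pending base bdev arrival"); once Malloc3 arrives (23:44:45.808), the examine callback matches it, opens and claims the base, and registers TestPT on top. The configuration driving this sequence is a single standard RPC (bdev and passthru names taken from the trace):

    scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT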
00:11:15.391  [2024-12-13 23:44:46.115544] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:11:15.391  [2024-12-13 23:44:46.116051] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0
00:11:15.391  [2024-12-13 23:44:46.116413] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:11:15.391  [2024-12-13 23:44:46.116609] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:11:15.391  [2024-12-13 23:44:46.116956] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:11:15.391  [2024-12-13 23:44:46.117095] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:11:15.391  [2024-12-13 23:44:46.117303] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:11:15.391  
00:11:15.391  [2024-12-13 23:44:46.117452] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:11:17.293  ************************************
00:11:17.293  END TEST bdev_hello_world
00:11:17.293  ************************************
00:11:17.293  
00:11:17.293  real	0m3.016s
00:11:17.293  user	0m2.428s
00:11:17.293  sys	0m0.437s
00:11:17.293   23:44:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:17.293   23:44:47	-- common/autotest_common.sh@10 -- # set +x
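bdev_hello_world finished in real 0m3.016s; the starred START TEST / END TEST banners and the real/user/sys block all come from the run_test wrapper, which bdev_bounds is routed through next. Its approximate shape, sketched from the banners in this log (the real wrapper also records per-test timing data):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }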
00:11:17.293   23:44:47	-- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds ''
00:11:17.293   23:44:47	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:11:17.293   23:44:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:17.293   23:44:47	-- common/autotest_common.sh@10 -- # set +x
00:11:17.293  ************************************
00:11:17.293  START TEST bdev_bounds
00:11:17.293  ************************************
00:11:17.293  Process bdevio pid: 108687
00:11:17.293  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:17.293   23:44:47	-- common/autotest_common.sh@1114 -- # bdev_bounds ''
00:11:17.294   23:44:47	-- bdev/blockdev.sh@288 -- # bdevio_pid=108687
00:11:17.294   23:44:47	-- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:11:17.294   23:44:47	-- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 108687'
00:11:17.294   23:44:47	-- bdev/blockdev.sh@291 -- # waitforlisten 108687
00:11:17.294   23:44:47	-- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:11:17.294   23:44:47	-- common/autotest_common.sh@829 -- # '[' -z 108687 ']'
00:11:17.294   23:44:47	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:17.294   23:44:47	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:17.294   23:44:47	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:17.294   23:44:47	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:17.294   23:44:47	-- common/autotest_common.sh@10 -- # set +x
00:11:17.294  [2024-12-13 23:44:47.987698] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:17.294  [2024-12-13 23:44:47.988128] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108687 ]
00:11:17.552  [2024-12-13 23:44:48.166548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:17.810  [2024-12-13 23:44:48.357338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:17.810  [2024-12-13 23:44:48.357476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:11:17.810  [2024-12-13 23:44:48.357479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
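bdevio was launched with -c 0x7, a hexadecimal coremask whose set bits pick the reactor cores: 0x7 = 0b111, hence the three reactors on cores 0, 1 and 2 (their startup notices may land out of order, as here). A quick way to expand such a mask:

    mask=0x7; for b in {0..7}; do (( (mask >> b) & 1 )) && echo "core $b"; done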
00:11:18.069  [2024-12-13 23:44:48.736389] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:18.069  [2024-12-13 23:44:48.736853] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:18.069  [2024-12-13 23:44:48.744353] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:18.069  [2024-12-13 23:44:48.744619] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:18.069  [2024-12-13 23:44:48.752384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:18.069  [2024-12-13 23:44:48.752637] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:18.069  [2024-12-13 23:44:48.752781] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:18.328  [2024-12-13 23:44:48.953840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:18.328  [2024-12-13 23:44:48.954377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:18.328  [2024-12-13 23:44:48.954569] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:11:18.328  [2024-12-13 23:44:48.954741] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:18.328  [2024-12-13 23:44:48.957402] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:18.328  [2024-12-13 23:44:48.957630] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:18.894   23:44:49	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:18.895   23:44:49	-- common/autotest_common.sh@862 -- # return 0
00:11:18.895   23:44:49	-- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:11:19.153  I/O targets:
00:11:19.153    Malloc0: 65536 blocks of 512 bytes (32 MiB)
00:11:19.153    Malloc1p0: 32768 blocks of 512 bytes (16 MiB)
00:11:19.153    Malloc1p1: 32768 blocks of 512 bytes (16 MiB)
00:11:19.153    Malloc2p0: 8192 blocks of 512 bytes (4 MiB)
00:11:19.154    Malloc2p1: 8192 blocks of 512 bytes (4 MiB)
00:11:19.154    Malloc2p2: 8192 blocks of 512 bytes (4 MiB)
00:11:19.154    Malloc2p3: 8192 blocks of 512 bytes (4 MiB)
00:11:19.154    Malloc2p4: 8192 blocks of 512 bytes (4 MiB)
00:11:19.154    Malloc2p5: 8192 blocks of 512 bytes (4 MiB)
00:11:19.154    Malloc2p6: 8192 blocks of 512 bytes (4 MiB)
00:11:19.154    Malloc2p7: 8192 blocks of 512 bytes (4 MiB)
00:11:19.154    TestPT: 65536 blocks of 512 bytes (32 MiB)
00:11:19.154    raid0: 131072 blocks of 512 bytes (64 MiB)
00:11:19.154    concat0: 131072 blocks of 512 bytes (64 MiB)
00:11:19.154    raid1: 65536 blocks of 512 bytes (32 MiB)
00:11:19.154    AIO0: 5000 blocks of 2048 bytes (10 MiB)
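Each size in the target list is simply num_blocks x block_size from the JSON dump earlier: Malloc0 is 65536 x 512 B = 32 MiB; AIO0 is 5000 x 2048 B = 10,240,000 B, i.e. the 10 MB file dd produced (~9.8 MiB); raid0 and concat0 stripe/concatenate two 32 MiB Malloc bases into 131072 blocks = 64 MiB, while raid1 mirrors its two bases and stays at 65536 blocks = 32 MiB.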
00:11:19.154  
00:11:19.154  
00:11:19.154       CUnit - A unit testing framework for C - Version 2.1-3
00:11:19.154       http://cunit.sourceforge.net/
00:11:19.154  
00:11:19.154  
00:11:19.154  Suite: bdevio tests on: AIO0
00:11:19.154    Test: blockdev write read block ...passed
00:11:19.154    Test: blockdev write zeroes read block ...passed
00:11:19.154    Test: blockdev write zeroes read no split ...passed
00:11:19.154    Test: blockdev write zeroes read split ...passed
00:11:19.154    Test: blockdev write zeroes read split partial ...passed
00:11:19.154    Test: blockdev reset ...passed
00:11:19.154    Test: blockdev write read 8 blocks ...passed
00:11:19.154    Test: blockdev write read size > 128k ...passed
00:11:19.154    Test: blockdev write read invalid size ...passed
00:11:19.154    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:19.154    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:19.154    Test: blockdev write read max offset ...passed
00:11:19.154    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:19.154    Test: blockdev writev readv 8 blocks ...passed
00:11:19.154    Test: blockdev writev readv 30 x 1block ...passed
00:11:19.154    Test: blockdev writev readv block ...passed
00:11:19.154    Test: blockdev writev readv size > 128k ...passed
00:11:19.154    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:19.154    Test: blockdev comparev and writev ...passed
00:11:19.154    Test: blockdev nvme passthru rw ...passed
00:11:19.154    Test: blockdev nvme passthru vendor specific ...passed
00:11:19.154    Test: blockdev nvme admin passthru ...passed
00:11:19.154    Test: blockdev copy ...passed
00:11:19.154  Suite: bdevio tests on: raid1
00:11:19.154    Test: blockdev write read block ...passed
00:11:19.154    Test: blockdev write zeroes read block ...passed
00:11:19.154    Test: blockdev write zeroes read no split ...passed
00:11:19.154    Test: blockdev write zeroes read split ...passed
00:11:19.154    Test: blockdev write zeroes read split partial ...passed
00:11:19.154    Test: blockdev reset ...passed
00:11:19.154    Test: blockdev write read 8 blocks ...passed
00:11:19.154    Test: blockdev write read size > 128k ...passed
00:11:19.154    Test: blockdev write read invalid size ...passed
00:11:19.154    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:19.154    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:19.154    Test: blockdev write read max offset ...passed
00:11:19.154    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:19.154    Test: blockdev writev readv 8 blocks ...passed
00:11:19.154    Test: blockdev writev readv 30 x 1block ...passed
00:11:19.154    Test: blockdev writev readv block ...passed
00:11:19.154    Test: blockdev writev readv size > 128k ...passed
00:11:19.154    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:19.154    Test: blockdev comparev and writev ...passed
00:11:19.154    Test: blockdev nvme passthru rw ...passed
00:11:19.154    Test: blockdev nvme passthru vendor specific ...passed
00:11:19.154    Test: blockdev nvme admin passthru ...passed
00:11:19.154    Test: blockdev copy ...passed
00:11:19.154  Suite: bdevio tests on: concat0
00:11:19.154    Test: blockdev write read block ...passed
00:11:19.154    Test: blockdev write zeroes read block ...passed
00:11:19.154    Test: blockdev write zeroes read no split ...passed
00:11:19.154    Test: blockdev write zeroes read split ...passed
00:11:19.413    Test: blockdev write zeroes read split partial ...passed
00:11:19.413    Test: blockdev reset ...passed
00:11:19.413    Test: blockdev write read 8 blocks ...passed
00:11:19.413    Test: blockdev write read size > 128k ...passed
00:11:19.413    Test: blockdev write read invalid size ...passed
00:11:19.413    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:19.413    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:19.413    Test: blockdev write read max offset ...passed
00:11:19.413    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:19.413    Test: blockdev writev readv 8 blocks ...passed
00:11:19.413    Test: blockdev writev readv 30 x 1block ...passed
00:11:19.413    Test: blockdev writev readv block ...passed
00:11:19.413    Test: blockdev writev readv size > 128k ...passed
00:11:19.413    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:19.413    Test: blockdev comparev and writev ...passed
00:11:19.413    Test: blockdev nvme passthru rw ...passed
00:11:19.413    Test: blockdev nvme passthru vendor specific ...passed
00:11:19.413    Test: blockdev nvme admin passthru ...passed
00:11:19.413    Test: blockdev copy ...passed
00:11:19.413  Suite: bdevio tests on: raid0
00:11:19.413    Test: blockdev write read block ...passed
00:11:19.413    Test: blockdev write zeroes read block ...passed
00:11:19.413    Test: blockdev write zeroes read no split ...passed
00:11:19.413    Test: blockdev write zeroes read split ...passed
00:11:19.413    Test: blockdev write zeroes read split partial ...passed
00:11:19.413    Test: blockdev reset ...passed
00:11:19.413    Test: blockdev write read 8 blocks ...passed
00:11:19.413    Test: blockdev write read size > 128k ...passed
00:11:19.413    Test: blockdev write read invalid size ...passed
00:11:19.413    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:19.413    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:19.413    Test: blockdev write read max offset ...passed
00:11:19.413    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:19.413    Test: blockdev writev readv 8 blocks ...passed
00:11:19.413    Test: blockdev writev readv 30 x 1block ...passed
00:11:19.413    Test: blockdev writev readv block ...passed
00:11:19.413    Test: blockdev writev readv size > 128k ...passed
00:11:19.413    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:19.413    Test: blockdev comparev and writev ...passed
00:11:19.413    Test: blockdev nvme passthru rw ...passed
00:11:19.413    Test: blockdev nvme passthru vendor specific ...passed
00:11:19.413    Test: blockdev nvme admin passthru ...passed
00:11:19.413    Test: blockdev copy ...passed
00:11:19.413  Suite: bdevio tests on: TestPT
00:11:19.413    Test: blockdev write read block ...passed
00:11:19.413    Test: blockdev write zeroes read block ...passed
00:11:19.413    Test: blockdev write zeroes read no split ...passed
00:11:19.413    Test: blockdev write zeroes read split ...passed
00:11:19.413    Test: blockdev write zeroes read split partial ...passed
00:11:19.413    Test: blockdev reset ...passed
00:11:19.413    Test: blockdev write read 8 blocks ...passed
00:11:19.413    Test: blockdev write read size > 128k ...passed
00:11:19.413    Test: blockdev write read invalid size ...passed
00:11:19.413    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:19.413    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:19.413    Test: blockdev write read max offset ...passed
00:11:19.413    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:19.413    Test: blockdev writev readv 8 blocks ...passed
00:11:19.413    Test: blockdev writev readv 30 x 1block ...passed
00:11:19.413    Test: blockdev writev readv block ...passed
00:11:19.413    Test: blockdev writev readv size > 128k ...passed
00:11:19.413    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:19.413    Test: blockdev comparev and writev ...passed
00:11:19.413    Test: blockdev nvme passthru rw ...passed
00:11:19.413    Test: blockdev nvme passthru vendor specific ...passed
00:11:19.413    Test: blockdev nvme admin passthru ...passed
00:11:19.413    Test: blockdev copy ...passed
00:11:19.413  Suite: bdevio tests on: Malloc2p7
00:11:19.413    Test: blockdev write read block ...passed
00:11:19.413    Test: blockdev write zeroes read block ...passed
00:11:19.413    Test: blockdev write zeroes read no split ...passed
00:11:19.413    Test: blockdev write zeroes read split ...passed
00:11:19.413    Test: blockdev write zeroes read split partial ...passed
00:11:19.413    Test: blockdev reset ...passed
00:11:19.413    Test: blockdev write read 8 blocks ...passed
00:11:19.413    Test: blockdev write read size > 128k ...passed
00:11:19.413    Test: blockdev write read invalid size ...passed
00:11:19.413    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:19.413    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:19.414    Test: blockdev write read max offset ...passed
00:11:19.414    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:19.414    Test: blockdev writev readv 8 blocks ...passed
00:11:19.414    Test: blockdev writev readv 30 x 1block ...passed
00:11:19.414    Test: blockdev writev readv block ...passed
00:11:19.414    Test: blockdev writev readv size > 128k ...passed
00:11:19.414    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:19.414    Test: blockdev comparev and writev ...passed
00:11:19.414    Test: blockdev nvme passthru rw ...passed
00:11:19.414    Test: blockdev nvme passthru vendor specific ...passed
00:11:19.414    Test: blockdev nvme admin passthru ...passed
00:11:19.414    Test: blockdev copy ...passed
00:11:19.414  Suite: bdevio tests on: Malloc2p6
00:11:19.414    Test: blockdev write read block ...passed
00:11:19.414    Test: blockdev write zeroes read block ...passed
00:11:19.414    Test: blockdev write zeroes read no split ...passed
00:11:19.414    Test: blockdev write zeroes read split ...passed
00:11:19.414    Test: blockdev write zeroes read split partial ...passed
00:11:19.414    Test: blockdev reset ...passed
00:11:19.414    Test: blockdev write read 8 blocks ...passed
00:11:19.414    Test: blockdev write read size > 128k ...passed
00:11:19.414    Test: blockdev write read invalid size ...passed
00:11:19.414    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:19.414    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:19.414    Test: blockdev write read max offset ...passed
00:11:19.414    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:19.414    Test: blockdev writev readv 8 blocks ...passed
00:11:19.414    Test: blockdev writev readv 30 x 1block ...passed
00:11:19.414    Test: blockdev writev readv block ...passed
00:11:19.414    Test: blockdev writev readv size > 128k ...passed
00:11:19.414    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:19.414    Test: blockdev comparev and writev ...passed
00:11:19.414    Test: blockdev nvme passthru rw ...passed
00:11:19.414    Test: blockdev nvme passthru vendor specific ...passed
00:11:19.414    Test: blockdev nvme admin passthru ...passed
00:11:19.414    Test: blockdev copy ...passed
00:11:19.414  Suite: bdevio tests on: Malloc2p5
00:11:19.414    Test: blockdev write read block ...passed
00:11:19.414    Test: blockdev write zeroes read block ...passed
00:11:19.414    Test: blockdev write zeroes read no split ...passed
00:11:19.673    Test: blockdev write zeroes read split ...passed
00:11:19.673    Test: blockdev write zeroes read split partial ...passed
00:11:19.673    Test: blockdev reset ...passed
00:11:19.673    Test: blockdev write read 8 blocks ...passed
00:11:19.673    Test: blockdev write read size > 128k ...passed
00:11:19.673    Test: blockdev write read invalid size ...passed
00:11:19.673    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:19.673    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:19.673    Test: blockdev write read max offset ...passed
00:11:19.673    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:19.673    Test: blockdev writev readv 8 blocks ...passed
00:11:19.673    Test: blockdev writev readv 30 x 1block ...passed
00:11:19.673    Test: blockdev writev readv block ...passed
00:11:19.673    Test: blockdev writev readv size > 128k ...passed
00:11:19.673    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:19.673    Test: blockdev comparev and writev ...passed
00:11:19.673    Test: blockdev nvme passthru rw ...passed
00:11:19.673    Test: blockdev nvme passthru vendor specific ...passed
00:11:19.673    Test: blockdev nvme admin passthru ...passed
00:11:19.673    Test: blockdev copy ...passed
00:11:19.673  Suite: bdevio tests on: Malloc2p4
00:11:19.673    Test: blockdev write read block ...passed
00:11:19.673    Test: blockdev write zeroes read block ...passed
00:11:19.673    Test: blockdev write zeroes read no split ...passed
00:11:19.673    Test: blockdev write zeroes read split ...passed
00:11:19.673    Test: blockdev write zeroes read split partial ...passed
00:11:19.673    Test: blockdev reset ...passed
00:11:19.673    Test: blockdev write read 8 blocks ...passed
00:11:19.673    Test: blockdev write read size > 128k ...passed
00:11:19.673    Test: blockdev write read invalid size ...passed
00:11:19.673    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:19.673    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:19.673    Test: blockdev write read max offset ...passed
00:11:19.673    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:19.673    Test: blockdev writev readv 8 blocks ...passed
00:11:19.673    Test: blockdev writev readv 30 x 1block ...passed
00:11:19.673    Test: blockdev writev readv block ...passed
00:11:19.673    Test: blockdev writev readv size > 128k ...passed
00:11:19.673    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:19.673    Test: blockdev comparev and writev ...passed
00:11:19.673    Test: blockdev nvme passthru rw ...passed
00:11:19.673    Test: blockdev nvme passthru vendor specific ...passed
00:11:19.673    Test: blockdev nvme admin passthru ...passed
00:11:19.673    Test: blockdev copy ...passed
00:11:19.673  Suite: bdevio tests on: Malloc2p3
00:11:19.673    Test: blockdev write read block ...passed
00:11:19.673    Test: blockdev write zeroes read block ...passed
00:11:19.673    Test: blockdev write zeroes read no split ...passed
00:11:19.673    Test: blockdev write zeroes read split ...passed
00:11:19.673    Test: blockdev write zeroes read split partial ...passed
00:11:19.673    Test: blockdev reset ...passed
00:11:19.673    Test: blockdev write read 8 blocks ...passed
00:11:19.673    Test: blockdev write read size > 128k ...passed
00:11:19.673    Test: blockdev write read invalid size ...passed
00:11:19.673    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:19.673    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:19.673    Test: blockdev write read max offset ...passed
00:11:19.673    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:19.673    Test: blockdev writev readv 8 blocks ...passed
00:11:19.673    Test: blockdev writev readv 30 x 1block ...passed
00:11:19.673    Test: blockdev writev readv block ...passed
00:11:19.673    Test: blockdev writev readv size > 128k ...passed
00:11:19.673    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:19.673    Test: blockdev comparev and writev ...passed
00:11:19.673    Test: blockdev nvme passthru rw ...passed
00:11:19.673    Test: blockdev nvme passthru vendor specific ...passed
00:11:19.673    Test: blockdev nvme admin passthru ...passed
00:11:19.673    Test: blockdev copy ...passed
00:11:19.673  Suite: bdevio tests on: Malloc2p2
00:11:19.673    Test: blockdev write read block ...passed
00:11:19.673    Test: blockdev write zeroes read block ...passed
00:11:19.673    Test: blockdev write zeroes read no split ...passed
00:11:19.673    Test: blockdev write zeroes read split ...passed
00:11:19.673    Test: blockdev write zeroes read split partial ...passed
00:11:19.673    Test: blockdev reset ...passed
00:11:19.673    Test: blockdev write read 8 blocks ...passed
00:11:19.673    Test: blockdev write read size > 128k ...passed
00:11:19.673    Test: blockdev write read invalid size ...passed
00:11:19.673    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:19.673    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:19.673    Test: blockdev write read max offset ...passed
00:11:19.673    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:19.673    Test: blockdev writev readv 8 blocks ...passed
00:11:19.673    Test: blockdev writev readv 30 x 1block ...passed
00:11:19.673    Test: blockdev writev readv block ...passed
00:11:19.673    Test: blockdev writev readv size > 128k ...passed
00:11:19.673    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:19.673    Test: blockdev comparev and writev ...passed
00:11:19.673    Test: blockdev nvme passthru rw ...passed
00:11:19.673    Test: blockdev nvme passthru vendor specific ...passed
00:11:19.673    Test: blockdev nvme admin passthru ...passed
00:11:19.673    Test: blockdev copy ...passed
00:11:19.673  Suite: bdevio tests on: Malloc2p1
00:11:19.673    Test: blockdev write read block ...passed
00:11:19.673    Test: blockdev write zeroes read block ...passed
00:11:19.673    Test: blockdev write zeroes read no split ...passed
00:11:19.673    Test: blockdev write zeroes read split ...passed
00:11:19.673    Test: blockdev write zeroes read split partial ...passed
00:11:19.673    Test: blockdev reset ...passed
00:11:19.673    Test: blockdev write read 8 blocks ...passed
00:11:19.673    Test: blockdev write read size > 128k ...passed
00:11:19.673    Test: blockdev write read invalid size ...passed
00:11:19.673    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:19.673    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:19.673    Test: blockdev write read max offset ...passed
00:11:19.673    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:19.673    Test: blockdev writev readv 8 blocks ...passed
00:11:19.673    Test: blockdev writev readv 30 x 1block ...passed
00:11:19.673    Test: blockdev writev readv block ...passed
00:11:19.673    Test: blockdev writev readv size > 128k ...passed
00:11:19.674    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:19.674    Test: blockdev comparev and writev ...passed
00:11:19.674    Test: blockdev nvme passthru rw ...passed
00:11:19.674    Test: blockdev nvme passthru vendor specific ...passed
00:11:19.674    Test: blockdev nvme admin passthru ...passed
00:11:19.674    Test: blockdev copy ...passed
00:11:19.674  Suite: bdevio tests on: Malloc2p0
00:11:19.674    Test: blockdev write read block ...passed
00:11:19.674    Test: blockdev write zeroes read block ...passed
00:11:19.674    Test: blockdev write zeroes read no split ...passed
00:11:19.674    Test: blockdev write zeroes read split ...passed
00:11:19.933    Test: blockdev write zeroes read split partial ...passed
00:11:19.933    Test: blockdev reset ...passed
00:11:19.933    Test: blockdev write read 8 blocks ...passed
00:11:19.933    Test: blockdev write read size > 128k ...passed
00:11:19.933    Test: blockdev write read invalid size ...passed
00:11:19.933    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:19.933    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:19.933    Test: blockdev write read max offset ...passed
00:11:19.933    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:19.933    Test: blockdev writev readv 8 blocks ...passed
00:11:19.933    Test: blockdev writev readv 30 x 1block ...passed
00:11:19.933    Test: blockdev writev readv block ...passed
00:11:19.933    Test: blockdev writev readv size > 128k ...passed
00:11:19.933    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:19.933    Test: blockdev comparev and writev ...passed
00:11:19.933    Test: blockdev nvme passthru rw ...passed
00:11:19.933    Test: blockdev nvme passthru vendor specific ...passed
00:11:19.933    Test: blockdev nvme admin passthru ...passed
00:11:19.933    Test: blockdev copy ...passed
00:11:19.933  Suite: bdevio tests on: Malloc1p1
00:11:19.933    Test: blockdev write read block ...passed
00:11:19.933    Test: blockdev write zeroes read block ...passed
00:11:19.933    Test: blockdev write zeroes read no split ...passed
00:11:19.933    Test: blockdev write zeroes read split ...passed
00:11:19.933    Test: blockdev write zeroes read split partial ...passed
00:11:19.933    Test: blockdev reset ...passed
00:11:19.933    Test: blockdev write read 8 blocks ...passed
00:11:19.933    Test: blockdev write read size > 128k ...passed
00:11:19.933    Test: blockdev write read invalid size ...passed
00:11:19.933    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:19.933    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:19.933    Test: blockdev write read max offset ...passed
00:11:19.933    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:19.933    Test: blockdev writev readv 8 blocks ...passed
00:11:19.933    Test: blockdev writev readv 30 x 1block ...passed
00:11:19.933    Test: blockdev writev readv block ...passed
00:11:19.933    Test: blockdev writev readv size > 128k ...passed
00:11:19.933    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:19.933    Test: blockdev comparev and writev ...passed
00:11:19.933    Test: blockdev nvme passthru rw ...passed
00:11:19.933    Test: blockdev nvme passthru vendor specific ...passed
00:11:19.933    Test: blockdev nvme admin passthru ...passed
00:11:19.933    Test: blockdev copy ...passed
00:11:19.933  Suite: bdevio tests on: Malloc1p0
00:11:19.933    Test: blockdev write read block ...passed
00:11:19.933    Test: blockdev write zeroes read block ...passed
00:11:19.933    Test: blockdev write zeroes read no split ...passed
00:11:19.933    Test: blockdev write zeroes read split ...passed
00:11:19.933    Test: blockdev write zeroes read split partial ...passed
00:11:19.933    Test: blockdev reset ...passed
00:11:19.933    Test: blockdev write read 8 blocks ...passed
00:11:19.933    Test: blockdev write read size > 128k ...passed
00:11:19.933    Test: blockdev write read invalid size ...passed
00:11:19.933    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:19.933    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:19.933    Test: blockdev write read max offset ...passed
00:11:19.933    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:19.933    Test: blockdev writev readv 8 blocks ...passed
00:11:19.933    Test: blockdev writev readv 30 x 1block ...passed
00:11:19.933    Test: blockdev writev readv block ...passed
00:11:19.933    Test: blockdev writev readv size > 128k ...passed
00:11:19.933    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:19.933    Test: blockdev comparev and writev ...passed
00:11:19.933    Test: blockdev nvme passthru rw ...passed
00:11:19.933    Test: blockdev nvme passthru vendor specific ...passed
00:11:19.933    Test: blockdev nvme admin passthru ...passed
00:11:19.933    Test: blockdev copy ...passed
00:11:19.933  Suite: bdevio tests on: Malloc0
00:11:19.933    Test: blockdev write read block ...passed
00:11:19.933    Test: blockdev write zeroes read block ...passed
00:11:19.933    Test: blockdev write zeroes read no split ...passed
00:11:19.933    Test: blockdev write zeroes read split ...passed
00:11:19.933    Test: blockdev write zeroes read split partial ...passed
00:11:19.933    Test: blockdev reset ...passed
00:11:19.933    Test: blockdev write read 8 blocks ...passed
00:11:19.933    Test: blockdev write read size > 128k ...passed
00:11:19.933    Test: blockdev write read invalid size ...passed
00:11:19.933    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:19.933    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:19.933    Test: blockdev write read max offset ...passed
00:11:19.933    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:19.933    Test: blockdev writev readv 8 blocks ...passed
00:11:19.933    Test: blockdev writev readv 30 x 1block ...passed
00:11:19.933    Test: blockdev writev readv block ...passed
00:11:19.933    Test: blockdev writev readv size > 128k ...passed
00:11:19.933    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:19.933    Test: blockdev comparev and writev ...passed
00:11:19.933    Test: blockdev nvme passthru rw ...passed
00:11:19.933    Test: blockdev nvme passthru vendor specific ...passed
00:11:19.933    Test: blockdev nvme admin passthru ...passed
00:11:19.933    Test: blockdev copy ...passed
00:11:19.933  
00:11:19.933  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:11:19.933                suites     16     16    n/a      0        0
00:11:19.933                 tests    368    368    368      0        0
00:11:19.933               asserts   2224   2224   2224      0      n/a
00:11:19.933  
00:11:19.933  Elapsed time =    2.324 seconds
00:11:19.933  0
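
The Run Summary above is the only machine-checkable verdict bdevio emits: 16 suites, 368 tests, 2224 asserts, zero failures. A CI gate only needs the Failed column; a minimal sketch, assuming the column layout shown above and tolerating the timestamp prefix this log adds (the script name and field arithmetic are illustrative, not SPDK tooling):

  #!/usr/bin/env bash
  # cunit_gate.sh -- fail if a CUnit "Run Summary" reports any failures.
  set -euo pipefail
  log="${1:?usage: cunit_gate.sh <logfile>}"

  # For each summary row, "Failed" sits four fields after the row name
  # (Type Total Ran Passed Failed Inactive); scanning for the row name
  # keeps this working whether or not a timestamp prefixes the line.
  failed=$(awk '{ for (i = 1; i <= NF; i++)
                    if ($i == "suites" || $i == "tests" || $i == "asserts")
                        sum += $(i + 4) }
                END { print sum + 0 }' "$log")

  if (( failed > 0 )); then
      echo "CUnit reported $failed failure(s)" >&2
      exit 1
  fi
  echo "CUnit summary clean"
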
00:11:19.933   23:44:50	-- bdev/blockdev.sh@293 -- # killprocess 108687
00:11:19.933   23:44:50	-- common/autotest_common.sh@936 -- # '[' -z 108687 ']'
00:11:19.933   23:44:50	-- common/autotest_common.sh@940 -- # kill -0 108687
00:11:19.933    23:44:50	-- common/autotest_common.sh@941 -- # uname
00:11:19.933   23:44:50	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:19.933    23:44:50	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108687
00:11:19.933  killing process with pid 108687
00:11:19.933   23:44:50	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:19.933   23:44:50	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:19.933   23:44:50	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 108687'
00:11:19.933   23:44:50	-- common/autotest_common.sh@955 -- # kill 108687
00:11:19.933   23:44:50	-- common/autotest_common.sh@960 -- # wait 108687
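
The killprocess trace above follows one fixed shape: guard against an empty pid, probe with kill -0, confirm the OS, read the command name (reactor_0 for an SPDK app), refuse to signal a sudo wrapper, then kill and reap. A condensed sketch of that shape (function and variable names are illustrative, not the autotest_common.sh source):

  killprocess_sketch() {
      local pid=$1
      [[ -n "$pid" ]] || return 1                  # the '[' -z ... ']' guard
      kill -0 "$pid" 2>/dev/null || return 0       # nothing to do if gone
      local name=""
      if [[ "$(uname)" == Linux ]]; then
          name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
      fi
      [[ "$name" != sudo ]] || return 1            # never kill the sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true              # reap our own child; ignore rc
  }
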
00:11:21.836  ************************************
00:11:21.836  END TEST bdev_bounds
00:11:21.836  ************************************
00:11:21.836   23:44:52	-- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:11:21.836  
00:11:21.836  real	0m4.354s
00:11:21.836  user	0m11.200s
00:11:21.836  sys	0m0.600s
00:11:21.836   23:44:52	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:21.836   23:44:52	-- common/autotest_common.sh@10 -- # set +x
00:11:21.836   23:44:52	-- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:11:21.836   23:44:52	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:11:21.836   23:44:52	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:21.836   23:44:52	-- common/autotest_common.sh@10 -- # set +x
00:11:21.836  ************************************
00:11:21.836  START TEST bdev_nbd
00:11:21.836  ************************************
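
run_test, which produced the banner just above, is a thin wrapper: reject calls without a body (the '[' 5 -le 1 ']' check in the trace), print matching START/END markers with xtrace suppressed, and time the body, which is where the real/user/sys lines come from. A minimal sketch under those assumptions:

  run_test_sketch() {
      (( $# > 1 )) || return 1          # need a test name plus a command
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                         # emits the real/user/sys triple
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
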
00:11:21.836   23:44:52	-- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:11:21.836    23:44:52	-- bdev/blockdev.sh@298 -- # uname -s
00:11:21.836   23:44:52	-- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:11:21.836   23:44:52	-- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:21.836   23:44:52	-- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:11:21.836   23:44:52	-- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:21.836   23:44:52	-- bdev/blockdev.sh@302 -- # local bdev_all
00:11:21.836   23:44:52	-- bdev/blockdev.sh@303 -- # local bdev_num=16
00:11:21.836   23:44:52	-- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:11:21.836   23:44:52	-- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:21.836   23:44:52	-- bdev/blockdev.sh@309 -- # local nbd_all
00:11:21.836   23:44:52	-- bdev/blockdev.sh@310 -- # bdev_num=16
00:11:21.836   23:44:52	-- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:21.837   23:44:52	-- bdev/blockdev.sh@312 -- # local nbd_list
00:11:21.837   23:44:52	-- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:21.837   23:44:52	-- bdev/blockdev.sh@313 -- # local bdev_list
00:11:21.837   23:44:52	-- bdev/blockdev.sh@316 -- # nbd_pid=108776
00:11:21.837   23:44:52	-- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:11:21.837   23:44:52	-- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:11:21.837   23:44:52	-- bdev/blockdev.sh@318 -- # waitforlisten 108776 /var/tmp/spdk-nbd.sock
00:11:21.837   23:44:52	-- common/autotest_common.sh@829 -- # '[' -z 108776 ']'
00:11:21.837   23:44:52	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:11:21.837   23:44:52	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:21.837   23:44:52	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:11:21.837  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:11:21.837   23:44:52	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:21.837   23:44:52	-- common/autotest_common.sh@10 -- # set +x
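
The setup traced above is the whole fixture for the nbd test: confirm kernel nbd support, start bdev_svc on a private RPC socket with the bdev JSON config, arm a trap so the daemon dies on any exit, and wait for the socket to answer. A compact sketch assuming the paths used in this run; the polling loop stands in for waitforlisten, which retries differently, and rpc_get_methods may be spelled get_rpc_methods on older releases:

  spdk=/home/vagrant/spdk_repo/spdk
  rpc_sock=/var/tmp/spdk-nbd.sock

  [[ -e /sys/module/nbd ]] || modprobe nbd   # kernel nbd; may need root

  "$spdk/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -i 0 \
      --json "$spdk/test/bdev/bdev.json" &
  nbd_pid=$!
  trap 'kill "$nbd_pid" 2>/dev/null' SIGINT SIGTERM EXIT

  # Wait for the RPC socket to accept requests before issuing nbd RPCs.
  for ((i = 0; i < 100; i++)); do
      "$spdk/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null && break
      sleep 0.1
  done
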
00:11:21.837  [2024-12-13 23:44:52.392877] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:21.837  [2024-12-13 23:44:52.393294] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:21.837  [2024-12-13 23:44:52.548033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:22.095  [2024-12-13 23:44:52.738809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:22.663  [2024-12-13 23:44:53.120146] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:22.663  [2024-12-13 23:44:53.120557] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:22.663  [2024-12-13 23:44:53.128086] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:22.663  [2024-12-13 23:44:53.128341] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:22.663  [2024-12-13 23:44:53.136097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:22.663  [2024-12-13 23:44:53.136327] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:22.663  [2024-12-13 23:44:53.136474] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:22.663  [2024-12-13 23:44:53.324650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:22.663  [2024-12-13 23:44:53.325092] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:22.663  [2024-12-13 23:44:53.325198] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:11:22.663  [2024-12-13 23:44:53.325413] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:22.663  [2024-12-13 23:44:53.327969] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:22.663  [2024-12-13 23:44:53.328175] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
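
The vbdev_passthru notices above show a two-phase registration: the create request for TestPT arrives while Malloc3 is still absent, so creation is deferred ("vbdev creation deferred pending base bdev arrival"); once Malloc3 registers, the examine path matches and TestPT is built on top of it. The equivalent explicit RPC looks roughly like this (flag spellings vary between SPDK releases; check rpc.py bdev_passthru_create --help before relying on them):

  # Create a passthru vbdev named TestPT over Malloc3. If Malloc3 does not
  # exist yet, SPDK defers creation until the base bdev shows up, exactly
  # as the notices above describe.
  "$spdk/scripts/rpc.py" -s "$rpc_sock" bdev_passthru_create \
      -b Malloc3 -p TestPT
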
00:11:23.598   23:44:54	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:23.598   23:44:54	-- common/autotest_common.sh@862 -- # return 0
00:11:23.598   23:44:54	-- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:11:23.598   23:44:54	-- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:23.598   23:44:54	-- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:23.598   23:44:54	-- bdev/nbd_common.sh@114 -- # local bdev_list
00:11:23.598   23:44:54	-- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:11:23.598   23:44:54	-- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:23.598   23:44:54	-- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:23.598   23:44:54	-- bdev/nbd_common.sh@23 -- # local bdev_list
00:11:23.598   23:44:54	-- bdev/nbd_common.sh@24 -- # local i
00:11:23.598   23:44:54	-- bdev/nbd_common.sh@25 -- # local nbd_device
00:11:23.598   23:44:54	-- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:11:23.598   23:44:54	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:23.598    23:44:54	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0
00:11:23.598   23:44:54	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:11:23.598    23:44:54	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:11:23.598   23:44:54	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:11:23.598   23:44:54	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:11:23.598   23:44:54	-- common/autotest_common.sh@867 -- # local i
00:11:23.598   23:44:54	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:23.598   23:44:54	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:23.598   23:44:54	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:11:23.598   23:44:54	-- common/autotest_common.sh@871 -- # break
00:11:23.598   23:44:54	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:23.598   23:44:54	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:23.598   23:44:54	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:23.598  1+0 records in
00:11:23.598  1+0 records out
00:11:23.598  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598178 s, 6.8 MB/s
00:11:23.598    23:44:54	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:23.598   23:44:54	-- common/autotest_common.sh@884 -- # size=4096
00:11:23.598   23:44:54	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:23.598   23:44:54	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:23.598   23:44:54	-- common/autotest_common.sh@887 -- # return 0
00:11:23.598   23:44:54	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:23.598   23:44:54	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
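
Every nbd_start_disk above is verified the same way before moving on: poll /proc/partitions until the device name shows up (a budget of 20 tries in the trace), then read one 4 KiB block with O_DIRECT into a scratch file and confirm the read was non-empty (the trace compares the stat size against 0, not against 4096). A sketch of that check with illustrative names; the sleep between retries is an assumption, since the trace only shows the first attempt succeeding:

  waitfornbd_sketch() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      grep -q -w "$nbd_name" /proc/partitions || return 1

      # Prove the device answers reads: one direct-I/O block, size-checked.
      local scratch=/tmp/nbdtest size
      dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct
      size=$(stat -c %s "$scratch")
      rm -f "$scratch"
      [[ "$size" != 0 ]]
  }

The same attach-and-verify pattern repeats verbatim for the remaining fifteen devices below.
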
00:11:23.598    23:44:54	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0
00:11:23.857   23:44:54	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:11:23.857    23:44:54	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:11:23.857   23:44:54	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:11:23.857   23:44:54	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:11:23.857   23:44:54	-- common/autotest_common.sh@867 -- # local i
00:11:23.857   23:44:54	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:23.857   23:44:54	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:23.857   23:44:54	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:11:23.857   23:44:54	-- common/autotest_common.sh@871 -- # break
00:11:23.857   23:44:54	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:23.857   23:44:54	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:23.857   23:44:54	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:23.857  1+0 records in
00:11:23.857  1+0 records out
00:11:23.857  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503287 s, 8.1 MB/s
00:11:23.857    23:44:54	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:23.857   23:44:54	-- common/autotest_common.sh@884 -- # size=4096
00:11:23.857   23:44:54	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:23.857   23:44:54	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:23.857   23:44:54	-- common/autotest_common.sh@887 -- # return 0
00:11:23.857   23:44:54	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:23.857   23:44:54	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:23.857    23:44:54	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1
00:11:24.115   23:44:54	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:11:24.115    23:44:54	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:11:24.115   23:44:54	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:11:24.115   23:44:54	-- common/autotest_common.sh@866 -- # local nbd_name=nbd2
00:11:24.115   23:44:54	-- common/autotest_common.sh@867 -- # local i
00:11:24.115   23:44:54	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:24.115   23:44:54	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:24.115   23:44:54	-- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions
00:11:24.115   23:44:54	-- common/autotest_common.sh@871 -- # break
00:11:24.115   23:44:54	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:24.115   23:44:54	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:24.115   23:44:54	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:24.115  1+0 records in
00:11:24.115  1+0 records out
00:11:24.115  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695321 s, 5.9 MB/s
00:11:24.115    23:44:54	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:24.115   23:44:54	-- common/autotest_common.sh@884 -- # size=4096
00:11:24.115   23:44:54	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:24.115   23:44:54	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:24.115   23:44:54	-- common/autotest_common.sh@887 -- # return 0
00:11:24.115   23:44:54	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:24.115   23:44:54	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:24.374    23:44:54	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0
00:11:24.374   23:44:55	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:11:24.374    23:44:55	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:11:24.374   23:44:55	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:11:24.374   23:44:55	-- common/autotest_common.sh@866 -- # local nbd_name=nbd3
00:11:24.374   23:44:55	-- common/autotest_common.sh@867 -- # local i
00:11:24.374   23:44:55	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:24.374   23:44:55	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:24.374   23:44:55	-- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions
00:11:24.374   23:44:55	-- common/autotest_common.sh@871 -- # break
00:11:24.374   23:44:55	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:24.374   23:44:55	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:24.374   23:44:55	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:24.374  1+0 records in
00:11:24.374  1+0 records out
00:11:24.374  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452744 s, 9.0 MB/s
00:11:24.374    23:44:55	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:24.374   23:44:55	-- common/autotest_common.sh@884 -- # size=4096
00:11:24.374   23:44:55	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:24.374   23:44:55	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:24.374   23:44:55	-- common/autotest_common.sh@887 -- # return 0
00:11:24.374   23:44:55	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:24.374   23:44:55	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:24.374    23:44:55	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1
00:11:24.949   23:44:55	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:11:24.949    23:44:55	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:11:24.949   23:44:55	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:11:24.949   23:44:55	-- common/autotest_common.sh@866 -- # local nbd_name=nbd4
00:11:24.949   23:44:55	-- common/autotest_common.sh@867 -- # local i
00:11:24.949   23:44:55	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:24.949   23:44:55	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:24.949   23:44:55	-- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions
00:11:24.949   23:44:55	-- common/autotest_common.sh@871 -- # break
00:11:24.949   23:44:55	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:24.949   23:44:55	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:24.949   23:44:55	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:24.949  1+0 records in
00:11:24.949  1+0 records out
00:11:24.949  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105698 s, 3.9 MB/s
00:11:24.949    23:44:55	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:24.949   23:44:55	-- common/autotest_common.sh@884 -- # size=4096
00:11:24.949   23:44:55	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:24.949   23:44:55	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:24.949   23:44:55	-- common/autotest_common.sh@887 -- # return 0
00:11:24.949   23:44:55	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:24.949   23:44:55	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:24.949    23:44:55	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2
00:11:25.207   23:44:55	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:11:25.207    23:44:55	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:11:25.207   23:44:55	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:11:25.207   23:44:55	-- common/autotest_common.sh@866 -- # local nbd_name=nbd5
00:11:25.207   23:44:55	-- common/autotest_common.sh@867 -- # local i
00:11:25.207   23:44:55	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:25.207   23:44:55	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:25.207   23:44:55	-- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions
00:11:25.207   23:44:55	-- common/autotest_common.sh@871 -- # break
00:11:25.207   23:44:55	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:25.207   23:44:55	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:25.207   23:44:55	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:25.207  1+0 records in
00:11:25.207  1+0 records out
00:11:25.207  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066757 s, 6.1 MB/s
00:11:25.207    23:44:55	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:25.207   23:44:55	-- common/autotest_common.sh@884 -- # size=4096
00:11:25.207   23:44:55	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:25.207   23:44:55	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:25.207   23:44:55	-- common/autotest_common.sh@887 -- # return 0
00:11:25.207   23:44:55	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:25.207   23:44:55	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:25.207    23:44:55	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3
00:11:25.466   23:44:55	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:11:25.466    23:44:55	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:11:25.466   23:44:55	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:11:25.466   23:44:55	-- common/autotest_common.sh@866 -- # local nbd_name=nbd6
00:11:25.466   23:44:55	-- common/autotest_common.sh@867 -- # local i
00:11:25.466   23:44:55	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:25.466   23:44:55	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:25.466   23:44:55	-- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions
00:11:25.466   23:44:55	-- common/autotest_common.sh@871 -- # break
00:11:25.466   23:44:55	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:25.466   23:44:55	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:25.466   23:44:55	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:25.466  1+0 records in
00:11:25.466  1+0 records out
00:11:25.466  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522304 s, 7.8 MB/s
00:11:25.466    23:44:55	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:25.466   23:44:56	-- common/autotest_common.sh@884 -- # size=4096
00:11:25.466   23:44:56	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:25.466   23:44:56	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:25.466   23:44:56	-- common/autotest_common.sh@887 -- # return 0
00:11:25.466   23:44:56	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:25.466   23:44:56	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:25.466    23:44:56	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4
00:11:25.725   23:44:56	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7
00:11:25.725    23:44:56	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd7
00:11:25.725   23:44:56	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd7
00:11:25.725   23:44:56	-- common/autotest_common.sh@866 -- # local nbd_name=nbd7
00:11:25.725   23:44:56	-- common/autotest_common.sh@867 -- # local i
00:11:25.725   23:44:56	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:25.725   23:44:56	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:25.725   23:44:56	-- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions
00:11:25.725   23:44:56	-- common/autotest_common.sh@871 -- # break
00:11:25.725   23:44:56	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:25.725   23:44:56	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:25.725   23:44:56	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:25.725  1+0 records in
00:11:25.725  1+0 records out
00:11:25.725  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755667 s, 5.4 MB/s
00:11:25.725    23:44:56	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:25.725   23:44:56	-- common/autotest_common.sh@884 -- # size=4096
00:11:25.725   23:44:56	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:25.725   23:44:56	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:25.725   23:44:56	-- common/autotest_common.sh@887 -- # return 0
00:11:25.725   23:44:56	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:25.725   23:44:56	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:25.725    23:44:56	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5
00:11:25.983   23:44:56	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8
00:11:25.983    23:44:56	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd8
00:11:25.984   23:44:56	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd8
00:11:25.984   23:44:56	-- common/autotest_common.sh@866 -- # local nbd_name=nbd8
00:11:25.984   23:44:56	-- common/autotest_common.sh@867 -- # local i
00:11:25.984   23:44:56	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:25.984   23:44:56	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:25.984   23:44:56	-- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions
00:11:25.984   23:44:56	-- common/autotest_common.sh@871 -- # break
00:11:25.984   23:44:56	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:25.984   23:44:56	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:25.984   23:44:56	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:25.984  1+0 records in
00:11:25.984  1+0 records out
00:11:25.984  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598126 s, 6.8 MB/s
00:11:25.984    23:44:56	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:25.984   23:44:56	-- common/autotest_common.sh@884 -- # size=4096
00:11:25.984   23:44:56	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:25.984   23:44:56	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:25.984   23:44:56	-- common/autotest_common.sh@887 -- # return 0
00:11:25.984   23:44:56	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:25.984   23:44:56	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:25.984    23:44:56	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6
00:11:26.242   23:44:56	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9
00:11:26.242    23:44:56	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd9
00:11:26.242   23:44:56	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd9
00:11:26.242   23:44:56	-- common/autotest_common.sh@866 -- # local nbd_name=nbd9
00:11:26.242   23:44:56	-- common/autotest_common.sh@867 -- # local i
00:11:26.242   23:44:56	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:26.242   23:44:56	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:26.242   23:44:56	-- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions
00:11:26.242   23:44:56	-- common/autotest_common.sh@871 -- # break
00:11:26.242   23:44:56	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:26.242   23:44:56	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:26.242   23:44:56	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:26.242  1+0 records in
00:11:26.242  1+0 records out
00:11:26.242  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000966669 s, 4.2 MB/s
00:11:26.242    23:44:56	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:26.242   23:44:56	-- common/autotest_common.sh@884 -- # size=4096
00:11:26.242   23:44:56	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:26.242   23:44:56	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:26.242   23:44:56	-- common/autotest_common.sh@887 -- # return 0
00:11:26.242   23:44:56	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:26.242   23:44:56	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:26.242    23:44:56	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7
00:11:26.501   23:44:57	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10
00:11:26.501    23:44:57	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd10
00:11:26.501   23:44:57	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd10
00:11:26.501   23:44:57	-- common/autotest_common.sh@866 -- # local nbd_name=nbd10
00:11:26.501   23:44:57	-- common/autotest_common.sh@867 -- # local i
00:11:26.501   23:44:57	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:26.501   23:44:57	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:26.501   23:44:57	-- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions
00:11:26.501   23:44:57	-- common/autotest_common.sh@871 -- # break
00:11:26.501   23:44:57	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:26.501   23:44:57	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:26.501   23:44:57	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:26.501  1+0 records in
00:11:26.501  1+0 records out
00:11:26.501  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708391 s, 5.8 MB/s
00:11:26.501    23:44:57	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:26.501   23:44:57	-- common/autotest_common.sh@884 -- # size=4096
00:11:26.501   23:44:57	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:26.501   23:44:57	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:26.501   23:44:57	-- common/autotest_common.sh@887 -- # return 0
00:11:26.501   23:44:57	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:26.501   23:44:57	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:26.501    23:44:57	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT
00:11:26.760   23:44:57	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11
00:11:26.760    23:44:57	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd11
00:11:26.760   23:44:57	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd11
00:11:26.760   23:44:57	-- common/autotest_common.sh@866 -- # local nbd_name=nbd11
00:11:26.760   23:44:57	-- common/autotest_common.sh@867 -- # local i
00:11:26.760   23:44:57	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:26.760   23:44:57	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:26.760   23:44:57	-- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions
00:11:26.760   23:44:57	-- common/autotest_common.sh@871 -- # break
00:11:26.760   23:44:57	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:26.760   23:44:57	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:26.760   23:44:57	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:26.760  1+0 records in
00:11:26.760  1+0 records out
00:11:26.760  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004717 s, 8.7 MB/s
00:11:26.760    23:44:57	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:26.760   23:44:57	-- common/autotest_common.sh@884 -- # size=4096
00:11:26.760   23:44:57	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:26.760   23:44:57	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:26.760   23:44:57	-- common/autotest_common.sh@887 -- # return 0
00:11:26.760   23:44:57	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:26.760   23:44:57	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:26.760    23:44:57	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0
00:11:27.018   23:44:57	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12
00:11:27.018    23:44:57	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd12
00:11:27.018   23:44:57	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd12
00:11:27.018   23:44:57	-- common/autotest_common.sh@866 -- # local nbd_name=nbd12
00:11:27.018   23:44:57	-- common/autotest_common.sh@867 -- # local i
00:11:27.018   23:44:57	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:27.018   23:44:57	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:27.018   23:44:57	-- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions
00:11:27.018   23:44:57	-- common/autotest_common.sh@871 -- # break
00:11:27.018   23:44:57	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:27.018   23:44:57	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:27.018   23:44:57	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:27.018  1+0 records in
00:11:27.018  1+0 records out
00:11:27.018  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000754561 s, 5.4 MB/s
00:11:27.018    23:44:57	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.018   23:44:57	-- common/autotest_common.sh@884 -- # size=4096
00:11:27.018   23:44:57	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.018   23:44:57	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:27.018   23:44:57	-- common/autotest_common.sh@887 -- # return 0
00:11:27.018   23:44:57	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:27.018   23:44:57	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:27.018    23:44:57	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0
00:11:27.276   23:44:57	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13
00:11:27.276    23:44:57	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd13
00:11:27.276   23:44:57	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd13
00:11:27.276   23:44:57	-- common/autotest_common.sh@866 -- # local nbd_name=nbd13
00:11:27.276   23:44:57	-- common/autotest_common.sh@867 -- # local i
00:11:27.277   23:44:57	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:27.277   23:44:57	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:27.277   23:44:57	-- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions
00:11:27.277   23:44:57	-- common/autotest_common.sh@871 -- # break
00:11:27.277   23:44:57	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:27.277   23:44:57	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:27.277   23:44:57	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:27.277  1+0 records in
00:11:27.277  1+0 records out
00:11:27.277  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133024 s, 3.1 MB/s
00:11:27.277    23:44:57	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.277   23:44:57	-- common/autotest_common.sh@884 -- # size=4096
00:11:27.277   23:44:57	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.277   23:44:57	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:27.277   23:44:57	-- common/autotest_common.sh@887 -- # return 0
00:11:27.277   23:44:57	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:27.277   23:44:57	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:27.277    23:44:57	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1
00:11:27.843   23:44:58	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14
00:11:27.843    23:44:58	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd14
00:11:27.843   23:44:58	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd14
00:11:27.843   23:44:58	-- common/autotest_common.sh@866 -- # local nbd_name=nbd14
00:11:27.843   23:44:58	-- common/autotest_common.sh@867 -- # local i
00:11:27.843   23:44:58	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:27.843   23:44:58	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:27.843   23:44:58	-- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions
00:11:27.843   23:44:58	-- common/autotest_common.sh@871 -- # break
00:11:27.843   23:44:58	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:27.843   23:44:58	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:27.843   23:44:58	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:27.843  1+0 records in
00:11:27.843  1+0 records out
00:11:27.843  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114875 s, 3.6 MB/s
00:11:27.843    23:44:58	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.843   23:44:58	-- common/autotest_common.sh@884 -- # size=4096
00:11:27.844   23:44:58	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.844   23:44:58	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:27.844   23:44:58	-- common/autotest_common.sh@887 -- # return 0
00:11:27.844   23:44:58	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:27.844   23:44:58	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:27.844    23:44:58	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0
00:11:27.844   23:44:58	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15
00:11:27.844    23:44:58	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd15
00:11:27.844   23:44:58	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd15
00:11:27.844   23:44:58	-- common/autotest_common.sh@866 -- # local nbd_name=nbd15
00:11:27.844   23:44:58	-- common/autotest_common.sh@867 -- # local i
00:11:27.844   23:44:58	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:27.844   23:44:58	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:27.844   23:44:58	-- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions
00:11:27.844   23:44:58	-- common/autotest_common.sh@871 -- # break
00:11:27.844   23:44:58	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:27.844   23:44:58	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:27.844   23:44:58	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:27.844  1+0 records in
00:11:27.844  1+0 records out
00:11:27.844  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122397 s, 3.3 MB/s
00:11:27.844    23:44:58	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.844   23:44:58	-- common/autotest_common.sh@884 -- # size=4096
00:11:27.844   23:44:58	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.844   23:44:58	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:27.844   23:44:58	-- common/autotest_common.sh@887 -- # return 0
00:11:27.844   23:44:58	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:27.844   23:44:58	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:27.844    23:44:58	-- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:28.103   23:44:58	-- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd0",
00:11:28.103      "bdev_name": "Malloc0"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd1",
00:11:28.103      "bdev_name": "Malloc1p0"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd2",
00:11:28.103      "bdev_name": "Malloc1p1"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd3",
00:11:28.103      "bdev_name": "Malloc2p0"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd4",
00:11:28.103      "bdev_name": "Malloc2p1"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd5",
00:11:28.103      "bdev_name": "Malloc2p2"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd6",
00:11:28.103      "bdev_name": "Malloc2p3"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd7",
00:11:28.103      "bdev_name": "Malloc2p4"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd8",
00:11:28.103      "bdev_name": "Malloc2p5"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd9",
00:11:28.103      "bdev_name": "Malloc2p6"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd10",
00:11:28.103      "bdev_name": "Malloc2p7"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd11",
00:11:28.103      "bdev_name": "TestPT"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd12",
00:11:28.103      "bdev_name": "raid0"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd13",
00:11:28.103      "bdev_name": "concat0"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd14",
00:11:28.103      "bdev_name": "raid1"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd15",
00:11:28.103      "bdev_name": "AIO0"
00:11:28.103    }
00:11:28.103  ]'
00:11:28.103   23:44:58	-- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:11:28.103    23:44:58	-- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:11:28.103    23:44:58	-- bdev/nbd_common.sh@119 -- # echo '[
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd0",
00:11:28.103      "bdev_name": "Malloc0"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd1",
00:11:28.103      "bdev_name": "Malloc1p0"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd2",
00:11:28.103      "bdev_name": "Malloc1p1"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd3",
00:11:28.103      "bdev_name": "Malloc2p0"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd4",
00:11:28.103      "bdev_name": "Malloc2p1"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd5",
00:11:28.103      "bdev_name": "Malloc2p2"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd6",
00:11:28.103      "bdev_name": "Malloc2p3"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd7",
00:11:28.103      "bdev_name": "Malloc2p4"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd8",
00:11:28.103      "bdev_name": "Malloc2p5"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd9",
00:11:28.103      "bdev_name": "Malloc2p6"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd10",
00:11:28.103      "bdev_name": "Malloc2p7"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd11",
00:11:28.103      "bdev_name": "TestPT"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd12",
00:11:28.103      "bdev_name": "raid0"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd13",
00:11:28.103      "bdev_name": "concat0"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd14",
00:11:28.103      "bdev_name": "raid1"
00:11:28.103    },
00:11:28.103    {
00:11:28.103      "nbd_device": "/dev/nbd15",
00:11:28.103      "bdev_name": "AIO0"
00:11:28.103    }
00:11:28.103  ]'
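
nbd_get_disks returned the JSON array echoed twice above (once captured into nbd_disks_json, once as the traced echo feeding jq). Pulling columns out of it is plain jq work; a sketch assuming the socket and script paths from this run:

  spdk=/home/vagrant/spdk_repo/spdk
  rpc_sock=/var/tmp/spdk-nbd.sock
  json=$("$spdk/scripts/rpc.py" -s "$rpc_sock" nbd_get_disks)

  # Device paths, in the order the daemon reports them.
  mapfile -t nbd_devices < <(jq -r '.[] | .nbd_device' <<<"$json")

  # Reverse lookup: which bdev backs a given device, e.g. /dev/nbd11 -> TestPT.
  jq -r --arg dev /dev/nbd11 \
      '.[] | select(.nbd_device == $dev) | .bdev_name' <<<"$json"
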
00:11:28.103   23:44:58	-- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15'
00:11:28.103   23:44:58	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:28.103   23:44:58	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15')
00:11:28.103   23:44:58	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:28.103   23:44:58	-- bdev/nbd_common.sh@51 -- # local i
00:11:28.103   23:44:58	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:28.103   23:44:58	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:11:28.362    23:44:59	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:28.362   23:44:59	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:28.362   23:44:59	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:28.362   23:44:59	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:28.362   23:44:59	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:28.362   23:44:59	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:28.362   23:44:59	-- bdev/nbd_common.sh@41 -- # break
00:11:28.362   23:44:59	-- bdev/nbd_common.sh@45 -- # return 0
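
Teardown mirrors setup: for each device, nbd_stop_disk is issued and then waitfornbd_exit polls until the name has left /proc/partitions, so the next test never races a half-dead device. A sketch with the same illustrative naming and retry budget as before (the device list is abbreviated; this run walks all sixteen):

  waitfornbd_exit_sketch() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || return 0   # gone
          sleep 0.1
      done
      return 1   # still present after the retry budget
  }

  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd2)   # abbreviated
  for dev in "${nbd_list[@]}"; do
      "$spdk/scripts/rpc.py" -s "$rpc_sock" nbd_stop_disk "$dev"
      waitfornbd_exit_sketch "$(basename "$dev")"
  done
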
00:11:28.362   23:44:59	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:28.362   23:44:59	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:11:28.621    23:44:59	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:11:28.621   23:44:59	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:11:28.621   23:44:59	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:11:28.621   23:44:59	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:28.621   23:44:59	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:28.621   23:44:59	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:11:28.621   23:44:59	-- bdev/nbd_common.sh@41 -- # break
00:11:28.621   23:44:59	-- bdev/nbd_common.sh@45 -- # return 0
00:11:28.621   23:44:59	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:28.621   23:44:59	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:11:28.880    23:44:59	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:11:28.880   23:44:59	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:11:28.880   23:44:59	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:11:28.880   23:44:59	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:28.880   23:44:59	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:28.880   23:44:59	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:11:28.880   23:44:59	-- bdev/nbd_common.sh@41 -- # break
00:11:28.880   23:44:59	-- bdev/nbd_common.sh@45 -- # return 0
00:11:28.880   23:44:59	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:28.880   23:44:59	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:11:29.138    23:44:59	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@41 -- # break
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@45 -- # return 0
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:11:29.138    23:44:59	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@41 -- # break
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@45 -- # return 0
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:29.138   23:44:59	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:11:29.397    23:44:59	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:11:29.397   23:45:00	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:11:29.397   23:45:00	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:11:29.397   23:45:00	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:29.397   23:45:00	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:29.397   23:45:00	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:11:29.397   23:45:00	-- bdev/nbd_common.sh@41 -- # break
00:11:29.397   23:45:00	-- bdev/nbd_common.sh@45 -- # return 0
00:11:29.397   23:45:00	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:29.397   23:45:00	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:11:29.655    23:45:00	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:11:29.655   23:45:00	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:11:29.655   23:45:00	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:11:29.655   23:45:00	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:29.655   23:45:00	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:29.655   23:45:00	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:11:29.655   23:45:00	-- bdev/nbd_common.sh@41 -- # break
00:11:29.655   23:45:00	-- bdev/nbd_common.sh@45 -- # return 0
00:11:29.655   23:45:00	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:29.655   23:45:00	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7
00:11:29.914    23:45:00	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd7
00:11:29.914   23:45:00	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7
00:11:29.914   23:45:00	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7
00:11:29.914   23:45:00	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:29.914   23:45:00	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:29.914   23:45:00	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions
00:11:29.914   23:45:00	-- bdev/nbd_common.sh@41 -- # break
00:11:29.914   23:45:00	-- bdev/nbd_common.sh@45 -- # return 0
00:11:29.914   23:45:00	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:29.914   23:45:00	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8
00:11:30.173    23:45:00	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd8
00:11:30.173   23:45:00	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8
00:11:30.173   23:45:00	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8
00:11:30.173   23:45:00	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:30.173   23:45:00	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:30.173   23:45:00	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:11:30.173   23:45:00	-- bdev/nbd_common.sh@41 -- # break
00:11:30.173   23:45:00	-- bdev/nbd_common.sh@45 -- # return 0
00:11:30.173   23:45:00	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:30.173   23:45:00	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9
00:11:30.432    23:45:00	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd9
00:11:30.432   23:45:00	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9
00:11:30.432   23:45:00	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9
00:11:30.432   23:45:00	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:30.432   23:45:00	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:30.432   23:45:00	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:11:30.432   23:45:00	-- bdev/nbd_common.sh@41 -- # break
00:11:30.432   23:45:00	-- bdev/nbd_common.sh@45 -- # return 0
00:11:30.432   23:45:00	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:30.432   23:45:00	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:11:30.691    23:45:01	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@41 -- # break
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@45 -- # return 0
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:11:30.691    23:45:01	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@41 -- # break
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@45 -- # return 0
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:30.691   23:45:01	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:11:30.950    23:45:01	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:11:30.950   23:45:01	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:11:30.950   23:45:01	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:11:30.950   23:45:01	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:30.950   23:45:01	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:30.950   23:45:01	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:11:30.950   23:45:01	-- bdev/nbd_common.sh@41 -- # break
00:11:30.950   23:45:01	-- bdev/nbd_common.sh@45 -- # return 0
00:11:30.950   23:45:01	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:30.950   23:45:01	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:11:31.208    23:45:01	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:11:31.208   23:45:01	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:11:31.208   23:45:01	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:11:31.208   23:45:01	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:31.208   23:45:01	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:31.208   23:45:01	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:11:31.208   23:45:01	-- bdev/nbd_common.sh@41 -- # break
00:11:31.208   23:45:01	-- bdev/nbd_common.sh@45 -- # return 0
00:11:31.208   23:45:01	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:31.208   23:45:01	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:11:31.467    23:45:02	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:11:31.467   23:45:02	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:11:31.467   23:45:02	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:11:31.467   23:45:02	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:31.467   23:45:02	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:31.467   23:45:02	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:11:31.467   23:45:02	-- bdev/nbd_common.sh@41 -- # break
00:11:31.467   23:45:02	-- bdev/nbd_common.sh@45 -- # return 0
00:11:31.467   23:45:02	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:31.467   23:45:02	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15
00:11:31.726    23:45:02	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd15
00:11:31.726   23:45:02	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15
00:11:31.726   23:45:02	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15
00:11:31.726   23:45:02	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:31.726   23:45:02	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:31.726   23:45:02	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions
00:11:31.726   23:45:02	-- bdev/nbd_common.sh@41 -- # break
00:11:31.726   23:45:02	-- bdev/nbd_common.sh@45 -- # return 0
00:11:31.726    23:45:02	-- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:11:31.726    23:45:02	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:31.726     23:45:02	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:31.985    23:45:02	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:11:31.985     23:45:02	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:11:31.985     23:45:02	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:31.985    23:45:02	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:11:31.985     23:45:02	-- bdev/nbd_common.sh@65 -- # echo ''
00:11:31.985     23:45:02	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:31.985     23:45:02	-- bdev/nbd_common.sh@65 -- # true
00:11:31.985    23:45:02	-- bdev/nbd_common.sh@65 -- # count=0
00:11:31.985    23:45:02	-- bdev/nbd_common.sh@66 -- # echo 0
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@122 -- # count=0
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@127 -- # return 0
00:11:31.985   23:45:02	-- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@12 -- # local i
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:31.985   23:45:02	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:11:32.244  /dev/nbd0
00:11:32.244    23:45:02	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:11:32.244   23:45:02	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:11:32.244   23:45:02	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:11:32.244   23:45:02	-- common/autotest_common.sh@867 -- # local i
00:11:32.244   23:45:02	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:32.244   23:45:02	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:32.244   23:45:02	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:11:32.244   23:45:02	-- common/autotest_common.sh@871 -- # break
00:11:32.244   23:45:02	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:32.244   23:45:02	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:32.244   23:45:02	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:32.244  1+0 records in
00:11:32.244  1+0 records out
00:11:32.244  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522164 s, 7.8 MB/s
00:11:32.244    23:45:02	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:32.244   23:45:02	-- common/autotest_common.sh@884 -- # size=4096
00:11:32.244   23:45:02	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:32.244   23:45:02	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:32.244   23:45:02	-- common/autotest_common.sh@887 -- # return 0
00:11:32.244   23:45:02	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:32.244   23:45:02	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:32.244   23:45:02	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1
00:11:32.244  /dev/nbd1
00:11:32.503    23:45:02	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:11:32.503   23:45:02	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:11:32.503   23:45:02	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:11:32.503   23:45:02	-- common/autotest_common.sh@867 -- # local i
00:11:32.503   23:45:02	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:32.503   23:45:02	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:32.503   23:45:02	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:11:32.503   23:45:02	-- common/autotest_common.sh@871 -- # break
00:11:32.503   23:45:02	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:32.503   23:45:02	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:32.503   23:45:02	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:32.503  1+0 records in
00:11:32.503  1+0 records out
00:11:32.503  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457641 s, 9.0 MB/s
00:11:32.503    23:45:03	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:32.503   23:45:03	-- common/autotest_common.sh@884 -- # size=4096
00:11:32.503   23:45:03	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:32.503   23:45:03	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:32.503   23:45:03	-- common/autotest_common.sh@887 -- # return 0
00:11:32.503   23:45:03	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:32.503   23:45:03	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:32.503   23:45:03	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10
00:11:32.503  /dev/nbd10
00:11:32.503    23:45:03	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:11:32.503   23:45:03	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:11:32.503   23:45:03	-- common/autotest_common.sh@866 -- # local nbd_name=nbd10
00:11:32.503   23:45:03	-- common/autotest_common.sh@867 -- # local i
00:11:32.503   23:45:03	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:32.503   23:45:03	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:32.503   23:45:03	-- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions
00:11:32.503   23:45:03	-- common/autotest_common.sh@871 -- # break
00:11:32.503   23:45:03	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:32.503   23:45:03	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:32.503   23:45:03	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:32.503  1+0 records in
00:11:32.503  1+0 records out
00:11:32.762  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000870632 s, 4.7 MB/s
00:11:32.762    23:45:03	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:32.762   23:45:03	-- common/autotest_common.sh@884 -- # size=4096
00:11:32.762   23:45:03	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:32.762   23:45:03	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:32.762   23:45:03	-- common/autotest_common.sh@887 -- # return 0
00:11:32.762   23:45:03	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:32.762   23:45:03	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:32.762   23:45:03	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11
00:11:33.021  /dev/nbd11
00:11:33.021    23:45:03	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:11:33.021   23:45:03	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:11:33.021   23:45:03	-- common/autotest_common.sh@866 -- # local nbd_name=nbd11
00:11:33.021   23:45:03	-- common/autotest_common.sh@867 -- # local i
00:11:33.021   23:45:03	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:33.021   23:45:03	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:33.021   23:45:03	-- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions
00:11:33.021   23:45:03	-- common/autotest_common.sh@871 -- # break
00:11:33.021   23:45:03	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:33.021   23:45:03	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:33.021   23:45:03	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:33.021  1+0 records in
00:11:33.021  1+0 records out
00:11:33.021  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00084988 s, 4.8 MB/s
00:11:33.021    23:45:03	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:33.021   23:45:03	-- common/autotest_common.sh@884 -- # size=4096
00:11:33.021   23:45:03	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:33.021   23:45:03	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:33.021   23:45:03	-- common/autotest_common.sh@887 -- # return 0
00:11:33.021   23:45:03	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:33.021   23:45:03	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:33.021   23:45:03	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12
00:11:33.021  /dev/nbd12
00:11:33.021    23:45:03	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:11:33.280   23:45:03	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:11:33.280   23:45:03	-- common/autotest_common.sh@866 -- # local nbd_name=nbd12
00:11:33.280   23:45:03	-- common/autotest_common.sh@867 -- # local i
00:11:33.280   23:45:03	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:33.280   23:45:03	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:33.280   23:45:03	-- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions
00:11:33.280   23:45:03	-- common/autotest_common.sh@871 -- # break
00:11:33.280   23:45:03	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:33.280   23:45:03	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:33.280   23:45:03	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:33.280  1+0 records in
00:11:33.280  1+0 records out
00:11:33.280  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620711 s, 6.6 MB/s
00:11:33.280    23:45:03	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:33.280   23:45:03	-- common/autotest_common.sh@884 -- # size=4096
00:11:33.280   23:45:03	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:33.280   23:45:03	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:33.280   23:45:03	-- common/autotest_common.sh@887 -- # return 0
00:11:33.280   23:45:03	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:33.280   23:45:03	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:33.280   23:45:03	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13
00:11:33.280  /dev/nbd13
00:11:33.280    23:45:03	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:11:33.280   23:45:03	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:11:33.280   23:45:03	-- common/autotest_common.sh@866 -- # local nbd_name=nbd13
00:11:33.280   23:45:03	-- common/autotest_common.sh@867 -- # local i
00:11:33.280   23:45:03	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:33.280   23:45:03	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:33.280   23:45:03	-- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions
00:11:33.280   23:45:03	-- common/autotest_common.sh@871 -- # break
00:11:33.280   23:45:03	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:33.280   23:45:03	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:33.280   23:45:03	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:33.280  1+0 records in
00:11:33.280  1+0 records out
00:11:33.280  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529927 s, 7.7 MB/s
00:11:33.280    23:45:03	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:33.280   23:45:03	-- common/autotest_common.sh@884 -- # size=4096
00:11:33.280   23:45:03	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:33.280   23:45:03	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:33.280   23:45:03	-- common/autotest_common.sh@887 -- # return 0
00:11:33.280   23:45:03	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:33.280   23:45:03	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:33.280   23:45:03	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14
00:11:33.539  /dev/nbd14
00:11:33.539    23:45:04	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd14
00:11:33.539   23:45:04	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd14
00:11:33.539   23:45:04	-- common/autotest_common.sh@866 -- # local nbd_name=nbd14
00:11:33.539   23:45:04	-- common/autotest_common.sh@867 -- # local i
00:11:33.539   23:45:04	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:33.539   23:45:04	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:33.539   23:45:04	-- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions
00:11:33.539   23:45:04	-- common/autotest_common.sh@871 -- # break
00:11:33.539   23:45:04	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:33.539   23:45:04	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:33.539   23:45:04	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:33.539  1+0 records in
00:11:33.539  1+0 records out
00:11:33.539  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537572 s, 7.6 MB/s
00:11:33.539    23:45:04	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:33.539   23:45:04	-- common/autotest_common.sh@884 -- # size=4096
00:11:33.539   23:45:04	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:33.539   23:45:04	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:33.539   23:45:04	-- common/autotest_common.sh@887 -- # return 0
00:11:33.539   23:45:04	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:33.539   23:45:04	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:33.539   23:45:04	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15
00:11:33.798  /dev/nbd15
00:11:33.798    23:45:04	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd15
00:11:33.798   23:45:04	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd15
00:11:33.798   23:45:04	-- common/autotest_common.sh@866 -- # local nbd_name=nbd15
00:11:33.798   23:45:04	-- common/autotest_common.sh@867 -- # local i
00:11:33.798   23:45:04	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:33.798   23:45:04	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:33.798   23:45:04	-- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions
00:11:33.798   23:45:04	-- common/autotest_common.sh@871 -- # break
00:11:33.798   23:45:04	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:33.798   23:45:04	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:33.798   23:45:04	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:33.798  1+0 records in
00:11:33.798  1+0 records out
00:11:33.798  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689253 s, 5.9 MB/s
00:11:34.057    23:45:04	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:34.057   23:45:04	-- common/autotest_common.sh@884 -- # size=4096
00:11:34.057   23:45:04	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:34.057   23:45:04	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:34.057   23:45:04	-- common/autotest_common.sh@887 -- # return 0
00:11:34.057   23:45:04	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:34.057   23:45:04	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:34.057   23:45:04	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2
00:11:34.316  /dev/nbd2
00:11:34.316    23:45:04	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd2
00:11:34.316   23:45:04	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd2
00:11:34.316   23:45:04	-- common/autotest_common.sh@866 -- # local nbd_name=nbd2
00:11:34.316   23:45:04	-- common/autotest_common.sh@867 -- # local i
00:11:34.316   23:45:04	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:34.316   23:45:04	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:34.316   23:45:04	-- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions
00:11:34.316   23:45:04	-- common/autotest_common.sh@871 -- # break
00:11:34.316   23:45:04	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:34.316   23:45:04	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:34.316   23:45:04	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:34.316  1+0 records in
00:11:34.316  1+0 records out
00:11:34.316  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683934 s, 6.0 MB/s
00:11:34.316    23:45:04	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:34.316   23:45:04	-- common/autotest_common.sh@884 -- # size=4096
00:11:34.316   23:45:04	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:34.316   23:45:04	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:34.316   23:45:04	-- common/autotest_common.sh@887 -- # return 0
00:11:34.316   23:45:04	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:34.316   23:45:04	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:34.316   23:45:04	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3
00:11:34.574  /dev/nbd3
00:11:34.574    23:45:05	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd3
00:11:34.574   23:45:05	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd3
00:11:34.574   23:45:05	-- common/autotest_common.sh@866 -- # local nbd_name=nbd3
00:11:34.574   23:45:05	-- common/autotest_common.sh@867 -- # local i
00:11:34.574   23:45:05	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:34.574   23:45:05	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:34.574   23:45:05	-- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions
00:11:34.574   23:45:05	-- common/autotest_common.sh@871 -- # break
00:11:34.574   23:45:05	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:34.574   23:45:05	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:34.574   23:45:05	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:34.574  1+0 records in
00:11:34.574  1+0 records out
00:11:34.574  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752994 s, 5.4 MB/s
00:11:34.574    23:45:05	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:34.574   23:45:05	-- common/autotest_common.sh@884 -- # size=4096
00:11:34.574   23:45:05	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:34.574   23:45:05	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:34.574   23:45:05	-- common/autotest_common.sh@887 -- # return 0
00:11:34.574   23:45:05	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:34.574   23:45:05	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:34.574   23:45:05	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4
00:11:34.574  /dev/nbd4
00:11:34.832    23:45:05	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd4
00:11:34.832   23:45:05	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd4
00:11:34.832   23:45:05	-- common/autotest_common.sh@866 -- # local nbd_name=nbd4
00:11:34.832   23:45:05	-- common/autotest_common.sh@867 -- # local i
00:11:34.832   23:45:05	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:34.832   23:45:05	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:34.832   23:45:05	-- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions
00:11:34.832   23:45:05	-- common/autotest_common.sh@871 -- # break
00:11:34.832   23:45:05	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:34.832   23:45:05	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:34.832   23:45:05	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:34.832  1+0 records in
00:11:34.832  1+0 records out
00:11:34.832  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776346 s, 5.3 MB/s
00:11:34.832    23:45:05	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:34.832   23:45:05	-- common/autotest_common.sh@884 -- # size=4096
00:11:34.832   23:45:05	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:34.832   23:45:05	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:34.832   23:45:05	-- common/autotest_common.sh@887 -- # return 0
00:11:34.832   23:45:05	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:34.832   23:45:05	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:34.832   23:45:05	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5
00:11:34.832  /dev/nbd5
00:11:34.832    23:45:05	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd5
00:11:34.832   23:45:05	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd5
00:11:34.832   23:45:05	-- common/autotest_common.sh@866 -- # local nbd_name=nbd5
00:11:34.832   23:45:05	-- common/autotest_common.sh@867 -- # local i
00:11:34.832   23:45:05	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:34.832   23:45:05	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:34.832   23:45:05	-- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions
00:11:35.097   23:45:05	-- common/autotest_common.sh@871 -- # break
00:11:35.097   23:45:05	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:35.097   23:45:05	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:35.097   23:45:05	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:35.097  1+0 records in
00:11:35.097  1+0 records out
00:11:35.097  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000850599 s, 4.8 MB/s
00:11:35.097    23:45:05	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.097   23:45:05	-- common/autotest_common.sh@884 -- # size=4096
00:11:35.097   23:45:05	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.097   23:45:05	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:35.097   23:45:05	-- common/autotest_common.sh@887 -- # return 0
00:11:35.097   23:45:05	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:35.097   23:45:05	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:35.097   23:45:05	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6
00:11:35.357  /dev/nbd6
00:11:35.357    23:45:05	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd6
00:11:35.357   23:45:05	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd6
00:11:35.357   23:45:05	-- common/autotest_common.sh@866 -- # local nbd_name=nbd6
00:11:35.357   23:45:05	-- common/autotest_common.sh@867 -- # local i
00:11:35.357   23:45:05	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:35.357   23:45:05	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:35.357   23:45:05	-- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions
00:11:35.357   23:45:05	-- common/autotest_common.sh@871 -- # break
00:11:35.357   23:45:05	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:35.357   23:45:05	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:35.357   23:45:05	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:35.357  1+0 records in
00:11:35.357  1+0 records out
00:11:35.357  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717633 s, 5.7 MB/s
00:11:35.357    23:45:05	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.357   23:45:05	-- common/autotest_common.sh@884 -- # size=4096
00:11:35.357   23:45:05	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.357   23:45:05	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:35.358   23:45:05	-- common/autotest_common.sh@887 -- # return 0
00:11:35.358   23:45:05	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:35.358   23:45:05	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:35.358   23:45:05	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7
00:11:35.358  /dev/nbd7
00:11:35.358    23:45:06	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd7
00:11:35.616   23:45:06	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd7
00:11:35.616   23:45:06	-- common/autotest_common.sh@866 -- # local nbd_name=nbd7
00:11:35.616   23:45:06	-- common/autotest_common.sh@867 -- # local i
00:11:35.616   23:45:06	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:35.616   23:45:06	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:35.616   23:45:06	-- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions
00:11:35.616   23:45:06	-- common/autotest_common.sh@871 -- # break
00:11:35.616   23:45:06	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:35.616   23:45:06	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:35.616   23:45:06	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:35.616  1+0 records in
00:11:35.616  1+0 records out
00:11:35.616  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00081165 s, 5.0 MB/s
00:11:35.616    23:45:06	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.616   23:45:06	-- common/autotest_common.sh@884 -- # size=4096
00:11:35.617   23:45:06	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.617   23:45:06	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:35.617   23:45:06	-- common/autotest_common.sh@887 -- # return 0
00:11:35.617   23:45:06	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:35.617   23:45:06	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:35.617   23:45:06	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8
00:11:35.617  /dev/nbd8
00:11:35.617    23:45:06	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd8
00:11:35.617   23:45:06	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd8
00:11:35.617   23:45:06	-- common/autotest_common.sh@866 -- # local nbd_name=nbd8
00:11:35.617   23:45:06	-- common/autotest_common.sh@867 -- # local i
00:11:35.617   23:45:06	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:35.617   23:45:06	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:35.617   23:45:06	-- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions
00:11:35.617   23:45:06	-- common/autotest_common.sh@871 -- # break
00:11:35.617   23:45:06	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:35.617   23:45:06	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:35.617   23:45:06	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:35.617  1+0 records in
00:11:35.617  1+0 records out
00:11:35.617  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000987055 s, 4.1 MB/s
00:11:35.617    23:45:06	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.617   23:45:06	-- common/autotest_common.sh@884 -- # size=4096
00:11:35.617   23:45:06	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.617   23:45:06	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:35.617   23:45:06	-- common/autotest_common.sh@887 -- # return 0
00:11:35.617   23:45:06	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:35.617   23:45:06	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:35.617   23:45:06	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9
00:11:35.875  /dev/nbd9
00:11:35.876    23:45:06	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd9
00:11:35.876   23:45:06	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd9
00:11:35.876   23:45:06	-- common/autotest_common.sh@866 -- # local nbd_name=nbd9
00:11:35.876   23:45:06	-- common/autotest_common.sh@867 -- # local i
00:11:35.876   23:45:06	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:35.876   23:45:06	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:35.876   23:45:06	-- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions
00:11:35.876   23:45:06	-- common/autotest_common.sh@871 -- # break
00:11:35.876   23:45:06	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:35.876   23:45:06	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:35.876   23:45:06	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:35.876  1+0 records in
00:11:35.876  1+0 records out
00:11:35.876  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117892 s, 3.5 MB/s
00:11:35.876    23:45:06	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.876   23:45:06	-- common/autotest_common.sh@884 -- # size=4096
00:11:35.876   23:45:06	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.876   23:45:06	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:35.876   23:45:06	-- common/autotest_common.sh@887 -- # return 0
00:11:35.876   23:45:06	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:35.876   23:45:06	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:35.876    23:45:06	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:11:35.876    23:45:06	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:35.876     23:45:06	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:36.134    23:45:06	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:11:36.134    {
00:11:36.134      "nbd_device": "/dev/nbd0",
00:11:36.134      "bdev_name": "Malloc0"
00:11:36.134    },
00:11:36.134    {
00:11:36.134      "nbd_device": "/dev/nbd1",
00:11:36.134      "bdev_name": "Malloc1p0"
00:11:36.134    },
00:11:36.134    {
00:11:36.134      "nbd_device": "/dev/nbd10",
00:11:36.134      "bdev_name": "Malloc1p1"
00:11:36.134    },
00:11:36.134    {
00:11:36.134      "nbd_device": "/dev/nbd11",
00:11:36.134      "bdev_name": "Malloc2p0"
00:11:36.134    },
00:11:36.134    {
00:11:36.134      "nbd_device": "/dev/nbd12",
00:11:36.134      "bdev_name": "Malloc2p1"
00:11:36.134    },
00:11:36.134    {
00:11:36.134      "nbd_device": "/dev/nbd13",
00:11:36.135      "bdev_name": "Malloc2p2"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd14",
00:11:36.135      "bdev_name": "Malloc2p3"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd15",
00:11:36.135      "bdev_name": "Malloc2p4"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd2",
00:11:36.135      "bdev_name": "Malloc2p5"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd3",
00:11:36.135      "bdev_name": "Malloc2p6"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd4",
00:11:36.135      "bdev_name": "Malloc2p7"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd5",
00:11:36.135      "bdev_name": "TestPT"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd6",
00:11:36.135      "bdev_name": "raid0"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd7",
00:11:36.135      "bdev_name": "concat0"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd8",
00:11:36.135      "bdev_name": "raid1"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd9",
00:11:36.135      "bdev_name": "AIO0"
00:11:36.135    }
00:11:36.135  ]'
00:11:36.135     23:45:06	-- bdev/nbd_common.sh@64 -- # echo '[
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd0",
00:11:36.135      "bdev_name": "Malloc0"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd1",
00:11:36.135      "bdev_name": "Malloc1p0"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd10",
00:11:36.135      "bdev_name": "Malloc1p1"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd11",
00:11:36.135      "bdev_name": "Malloc2p0"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd12",
00:11:36.135      "bdev_name": "Malloc2p1"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd13",
00:11:36.135      "bdev_name": "Malloc2p2"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd14",
00:11:36.135      "bdev_name": "Malloc2p3"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd15",
00:11:36.135      "bdev_name": "Malloc2p4"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd2",
00:11:36.135      "bdev_name": "Malloc2p5"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd3",
00:11:36.135      "bdev_name": "Malloc2p6"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd4",
00:11:36.135      "bdev_name": "Malloc2p7"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd5",
00:11:36.135      "bdev_name": "TestPT"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd6",
00:11:36.135      "bdev_name": "raid0"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd7",
00:11:36.135      "bdev_name": "concat0"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd8",
00:11:36.135      "bdev_name": "raid1"
00:11:36.135    },
00:11:36.135    {
00:11:36.135      "nbd_device": "/dev/nbd9",
00:11:36.135      "bdev_name": "AIO0"
00:11:36.135    }
00:11:36.135  ]'
00:11:36.135     23:45:06	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:36.135    23:45:06	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:11:36.135  /dev/nbd1
00:11:36.135  /dev/nbd10
00:11:36.135  /dev/nbd11
00:11:36.135  /dev/nbd12
00:11:36.135  /dev/nbd13
00:11:36.135  /dev/nbd14
00:11:36.135  /dev/nbd15
00:11:36.135  /dev/nbd2
00:11:36.135  /dev/nbd3
00:11:36.135  /dev/nbd4
00:11:36.135  /dev/nbd5
00:11:36.135  /dev/nbd6
00:11:36.135  /dev/nbd7
00:11:36.135  /dev/nbd8
00:11:36.135  /dev/nbd9'
00:11:36.135     23:45:06	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:11:36.135  /dev/nbd1
00:11:36.135  /dev/nbd10
00:11:36.135  /dev/nbd11
00:11:36.135  /dev/nbd12
00:11:36.135  /dev/nbd13
00:11:36.135  /dev/nbd14
00:11:36.135  /dev/nbd15
00:11:36.135  /dev/nbd2
00:11:36.135  /dev/nbd3
00:11:36.135  /dev/nbd4
00:11:36.135  /dev/nbd5
00:11:36.135  /dev/nbd6
00:11:36.135  /dev/nbd7
00:11:36.135  /dev/nbd8
00:11:36.135  /dev/nbd9'
00:11:36.135     23:45:06	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:36.135    23:45:06	-- bdev/nbd_common.sh@65 -- # count=16
00:11:36.135    23:45:06	-- bdev/nbd_common.sh@66 -- # echo 16
00:11:36.135   23:45:06	-- bdev/nbd_common.sh@95 -- # count=16
00:11:36.135   23:45:06	-- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']'
00:11:36.135   23:45:06	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write
00:11:36.135   23:45:06	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:36.135   23:45:06	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:11:36.135   23:45:06	-- bdev/nbd_common.sh@71 -- # local operation=write
00:11:36.135   23:45:06	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:11:36.135   23:45:06	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:11:36.135   23:45:06	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:11:36.135  256+0 records in
00:11:36.135  256+0 records out
00:11:36.135  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113047 s, 92.8 MB/s
00:11:36.135   23:45:06	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:36.135   23:45:06	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:11:36.394  256+0 records in
00:11:36.394  256+0 records out
00:11:36.394  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130032 s, 8.1 MB/s
00:11:36.394   23:45:06	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:36.394   23:45:06	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:11:36.653  256+0 records in
00:11:36.653  256+0 records out
00:11:36.653  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14001 s, 7.5 MB/s
00:11:36.653   23:45:07	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:36.653   23:45:07	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:11:36.653  256+0 records in
00:11:36.653  256+0 records out
00:11:36.653  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146283 s, 7.2 MB/s
00:11:36.653   23:45:07	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:36.653   23:45:07	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:11:36.912  256+0 records in
00:11:36.912  256+0 records out
00:11:36.912  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138675 s, 7.6 MB/s
00:11:36.912   23:45:07	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:36.912   23:45:07	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:11:36.912  256+0 records in
00:11:36.912  256+0 records out
00:11:36.912  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138569 s, 7.6 MB/s
00:11:36.912   23:45:07	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:36.912   23:45:07	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:11:37.170  256+0 records in
00:11:37.170  256+0 records out
00:11:37.170  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137797 s, 7.6 MB/s
00:11:37.170   23:45:07	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:37.170   23:45:07	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct
00:11:37.170  256+0 records in
00:11:37.170  256+0 records out
00:11:37.170  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135293 s, 7.8 MB/s
00:11:37.170   23:45:07	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:37.170   23:45:07	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct
00:11:37.430  256+0 records in
00:11:37.430  256+0 records out
00:11:37.430  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126684 s, 8.3 MB/s
00:11:37.430   23:45:07	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:37.430   23:45:07	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct
00:11:37.430  256+0 records in
00:11:37.430  256+0 records out
00:11:37.430  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134598 s, 7.8 MB/s
00:11:37.430   23:45:08	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:37.430   23:45:08	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct
00:11:37.688  256+0 records in
00:11:37.688  256+0 records out
00:11:37.688  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140355 s, 7.5 MB/s
00:11:37.688   23:45:08	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:37.688   23:45:08	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct
00:11:37.688  256+0 records in
00:11:37.688  256+0 records out
00:11:37.688  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131452 s, 8.0 MB/s
00:11:37.688   23:45:08	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:37.688   23:45:08	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct
00:11:37.947  256+0 records in
00:11:37.947  256+0 records out
00:11:37.947  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139544 s, 7.5 MB/s
00:11:37.947   23:45:08	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:37.947   23:45:08	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct
00:11:38.206  256+0 records in
00:11:38.206  256+0 records out
00:11:38.206  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137509 s, 7.6 MB/s
00:11:38.206   23:45:08	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:38.206   23:45:08	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct
00:11:38.206  256+0 records in
00:11:38.206  256+0 records out
00:11:38.206  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140681 s, 7.5 MB/s
00:11:38.206   23:45:08	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:38.206   23:45:08	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct
00:11:38.465  256+0 records in
00:11:38.465  256+0 records out
00:11:38.465  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144741 s, 7.2 MB/s
00:11:38.465   23:45:08	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:38.465   23:45:08	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct
00:11:38.465  256+0 records in
00:11:38.465  256+0 records out
00:11:38.465  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.207076 s, 5.1 MB/s
00:11:38.465   23:45:09	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify
00:11:38.465   23:45:09	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:38.465   23:45:09	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:11:38.465   23:45:09	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:11:38.465   23:45:09	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:11:38.465   23:45:09	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:11:38.465   23:45:09	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:11:38.465   23:45:09	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:38.466   23:45:09	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:11:38.466   23:45:09	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:38.724   23:45:09	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:11:38.724   23:45:09	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:38.724   23:45:09	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:11:38.724   23:45:09	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:38.724   23:45:09	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:11:38.724   23:45:09	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:38.724   23:45:09	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@51 -- # local i
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:38.725   23:45:09	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:11:38.984    23:45:09	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:38.984   23:45:09	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:38.984   23:45:09	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:38.984   23:45:09	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:38.984   23:45:09	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:38.984   23:45:09	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:38.984   23:45:09	-- bdev/nbd_common.sh@41 -- # break
00:11:38.984   23:45:09	-- bdev/nbd_common.sh@45 -- # return 0
00:11:38.984   23:45:09	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:38.984   23:45:09	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:11:39.242    23:45:09	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:11:39.242   23:45:09	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:11:39.242   23:45:09	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:11:39.242   23:45:09	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:39.242   23:45:09	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:39.243   23:45:09	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:11:39.243   23:45:09	-- bdev/nbd_common.sh@41 -- # break
00:11:39.243   23:45:09	-- bdev/nbd_common.sh@45 -- # return 0
00:11:39.243   23:45:09	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:39.243   23:45:09	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:11:39.501    23:45:10	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:11:39.501   23:45:10	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:11:39.501   23:45:10	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:11:39.501   23:45:10	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:39.501   23:45:10	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:39.501   23:45:10	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:11:39.501   23:45:10	-- bdev/nbd_common.sh@41 -- # break
00:11:39.501   23:45:10	-- bdev/nbd_common.sh@45 -- # return 0
00:11:39.501   23:45:10	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:39.501   23:45:10	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:11:39.760    23:45:10	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:11:39.760   23:45:10	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:11:39.760   23:45:10	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:11:39.760   23:45:10	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:39.760   23:45:10	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:39.760   23:45:10	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:11:39.760   23:45:10	-- bdev/nbd_common.sh@41 -- # break
00:11:39.760   23:45:10	-- bdev/nbd_common.sh@45 -- # return 0
00:11:39.760   23:45:10	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:39.760   23:45:10	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:11:40.019    23:45:10	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:11:40.019   23:45:10	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:11:40.019   23:45:10	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:11:40.019   23:45:10	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:40.019   23:45:10	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:40.019   23:45:10	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:11:40.019   23:45:10	-- bdev/nbd_common.sh@41 -- # break
00:11:40.019   23:45:10	-- bdev/nbd_common.sh@45 -- # return 0
00:11:40.019   23:45:10	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:40.019   23:45:10	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:11:40.278    23:45:10	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:11:40.278   23:45:10	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:11:40.278   23:45:10	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:11:40.279   23:45:10	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:40.279   23:45:10	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:40.279   23:45:10	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:11:40.279   23:45:10	-- bdev/nbd_common.sh@41 -- # break
00:11:40.279   23:45:10	-- bdev/nbd_common.sh@45 -- # return 0
00:11:40.279   23:45:10	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:40.279   23:45:10	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:11:40.537    23:45:11	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:11:40.537   23:45:11	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:11:40.537   23:45:11	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:11:40.537   23:45:11	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:40.537   23:45:11	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:40.537   23:45:11	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:11:40.537   23:45:11	-- bdev/nbd_common.sh@41 -- # break
00:11:40.537   23:45:11	-- bdev/nbd_common.sh@45 -- # return 0
00:11:40.537   23:45:11	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:40.537   23:45:11	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15
00:11:40.796    23:45:11	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd15
00:11:40.796   23:45:11	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15
00:11:40.796   23:45:11	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15
00:11:40.796   23:45:11	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:40.796   23:45:11	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:40.796   23:45:11	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions
00:11:40.796   23:45:11	-- bdev/nbd_common.sh@41 -- # break
00:11:40.796   23:45:11	-- bdev/nbd_common.sh@45 -- # return 0
00:11:40.796   23:45:11	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:40.796   23:45:11	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:11:41.055    23:45:11	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@41 -- # break
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@45 -- # return 0
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:11:41.055    23:45:11	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@41 -- # break
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@45 -- # return 0
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:41.055   23:45:11	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:11:41.314    23:45:11	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:11:41.314   23:45:11	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:11:41.314   23:45:11	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:11:41.314   23:45:11	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:41.314   23:45:11	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:41.314   23:45:11	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:11:41.314   23:45:11	-- bdev/nbd_common.sh@41 -- # break
00:11:41.314   23:45:11	-- bdev/nbd_common.sh@45 -- # return 0
00:11:41.314   23:45:11	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:41.314   23:45:11	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:11:41.573    23:45:12	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:11:41.573   23:45:12	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:11:41.573   23:45:12	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:11:41.573   23:45:12	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:41.573   23:45:12	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:41.573   23:45:12	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:11:41.573   23:45:12	-- bdev/nbd_common.sh@41 -- # break
00:11:41.573   23:45:12	-- bdev/nbd_common.sh@45 -- # return 0
00:11:41.573   23:45:12	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:41.573   23:45:12	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:11:41.832    23:45:12	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:11:41.832   23:45:12	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:11:41.832   23:45:12	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:11:41.832   23:45:12	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:41.832   23:45:12	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:41.832   23:45:12	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:11:41.832   23:45:12	-- bdev/nbd_common.sh@41 -- # break
00:11:41.832   23:45:12	-- bdev/nbd_common.sh@45 -- # return 0
00:11:41.832   23:45:12	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:41.832   23:45:12	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7
00:11:42.091    23:45:12	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd7
00:11:42.091   23:45:12	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7
00:11:42.091   23:45:12	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7
00:11:42.091   23:45:12	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:42.091   23:45:12	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:42.091   23:45:12	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions
00:11:42.091   23:45:12	-- bdev/nbd_common.sh@41 -- # break
00:11:42.091   23:45:12	-- bdev/nbd_common.sh@45 -- # return 0
00:11:42.091   23:45:12	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:42.091   23:45:12	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8
00:11:42.091    23:45:12	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd8
00:11:42.350   23:45:12	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8
00:11:42.350   23:45:12	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8
00:11:42.350   23:45:12	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:42.350   23:45:12	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:42.350   23:45:12	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:11:42.350   23:45:12	-- bdev/nbd_common.sh@41 -- # break
00:11:42.350   23:45:12	-- bdev/nbd_common.sh@45 -- # return 0
00:11:42.350   23:45:12	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:42.350   23:45:12	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9
00:11:42.350    23:45:13	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd9
00:11:42.350   23:45:13	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9
00:11:42.350   23:45:13	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9
00:11:42.350   23:45:13	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:42.350   23:45:13	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:42.350   23:45:13	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:11:42.608   23:45:13	-- bdev/nbd_common.sh@41 -- # break
00:11:42.608   23:45:13	-- bdev/nbd_common.sh@45 -- # return 0
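
Each of the sixteen devices above goes through the same stop-and-wait cycle: an nbd_stop_disk RPC, then a poll of /proc/partitions until the kernel device node disappears. A minimal reconstruction of the two nbd_common.sh helpers from the traced lines (the retry delay in the poll loop is not visible in the xtrace and is an assumption):

    # Poll until $1 (e.g. "nbd0") no longer appears in /proc/partitions.
    waitfornbd_exit() {
        local nbd_name=$1

        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1    # assumed interval; the trace only shows the loop bounds
            else
                break        # device is gone (the traced break at @41)
            fi
        done

        return 0
    }

    # Stop every export in $2 (a space-separated /dev/nbd* list) via the
    # SPDK RPC socket in $1, waiting for each device node to vanish.
    nbd_stop_disks() {
        local rpc_server=$1
        local nbd_list=($2)
        local i

        for i in "${nbd_list[@]}"; do
            scripts/rpc.py -s "$rpc_server" nbd_stop_disk "$i"
            waitfornbd_exit "$(basename "$i")"
        done
    }
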
00:11:42.608    23:45:13	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:11:42.608    23:45:13	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:42.608     23:45:13	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:42.608    23:45:13	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:11:42.867     23:45:13	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:11:42.867     23:45:13	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:42.867    23:45:13	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:11:42.867     23:45:13	-- bdev/nbd_common.sh@65 -- # echo ''
00:11:42.867     23:45:13	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:42.867     23:45:13	-- bdev/nbd_common.sh@65 -- # true
00:11:42.867    23:45:13	-- bdev/nbd_common.sh@65 -- # count=0
00:11:42.867    23:45:13	-- bdev/nbd_common.sh@66 -- # echo 0
00:11:42.867   23:45:13	-- bdev/nbd_common.sh@104 -- # count=0
00:11:42.867   23:45:13	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:11:42.867   23:45:13	-- bdev/nbd_common.sh@109 -- # return 0
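
With all exports stopped, nbd_common.sh@104 asserts that nothing is left: nbd_get_disks returns '[]', jq maps it to an empty device-name list, and grep -c counts zero /dev/nbd entries. Since grep exits non-zero when nothing matches, the traced 'true' at @65 keeps the zero count from aborting the script. A sketch of that check:

    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count

        nbd_disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # '|| true': grep -c fails on zero matches, which is the expected
        # outcome here, so the failure is swallowed and 0 is reported.
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }
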
00:11:42.867   23:45:13	-- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:11:42.867   23:45:13	-- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:42.867   23:45:13	-- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:42.867   23:45:13	-- bdev/nbd_common.sh@132 -- # local nbd_list
00:11:42.867   23:45:13	-- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:11:42.867   23:45:13	-- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:11:43.126  malloc_lvol_verify
00:11:43.126   23:45:13	-- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:11:43.384  ac0a5521-c5f0-44db-ab7d-65afea59d1a5
00:11:43.384   23:45:13	-- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:11:43.384  3a73b0df-323b-4b24-b38b-7a4956f2ec96
00:11:43.643   23:45:14	-- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:11:43.643  /dev/nbd0
00:11:43.643   23:45:14	-- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:11:43.643  mke2fs 1.46.5 (30-Dec-2021)
00:11:43.643  
00:11:43.643  Filesystem too small for a journal
00:11:43.643  Discarding device blocks: done
00:11:43.643  Creating filesystem with 1024 4k blocks and 1024 inodes
00:11:43.643  
00:11:43.643  Allocating group tables: done
00:11:43.643  Writing inode tables: done
00:11:43.643  Writing superblocks and filesystem accounting information: done
00:11:43.643  
00:11:43.643   23:45:14	-- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:11:43.643   23:45:14	-- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:11:43.643   23:45:14	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:43.643   23:45:14	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:11:43.643   23:45:14	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:43.643   23:45:14	-- bdev/nbd_common.sh@51 -- # local i
00:11:43.643   23:45:14	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:43.643   23:45:14	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:11:43.902    23:45:14	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:43.902   23:45:14	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:43.902   23:45:14	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:43.902   23:45:14	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:43.902   23:45:14	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:43.902   23:45:14	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:43.902   23:45:14	-- bdev/nbd_common.sh@41 -- # break
00:11:43.902   23:45:14	-- bdev/nbd_common.sh@45 -- # return 0
00:11:43.902   23:45:14	-- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:11:43.902   23:45:14	-- bdev/nbd_common.sh@147 -- # return 0
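
The lvol verification traced above (nbd_common.sh@131-147) stands up a small stack end to end: a 16 MiB malloc bdev with 512-byte blocks, an lvstore on it, a 4 MiB lvol, an nbd export of that lvol, and finally mkfs.ext4 on the exported device ("Filesystem too small for a journal" is expected at 4 MiB). Roughly, with the single-device case shown for brevity:

    nbd_with_lvol_verify() {
        local rpc_server=$1
        local nbd=$2            # e.g. /dev/nbd0
        local mkfs_ret

        scripts/rpc.py -s "$rpc_server" bdev_malloc_create -b malloc_lvol_verify 16 512
        scripts/rpc.py -s "$rpc_server" bdev_lvol_create_lvstore malloc_lvol_verify lvs
        scripts/rpc.py -s "$rpc_server" bdev_lvol_create lvol 4 -l lvs
        scripts/rpc.py -s "$rpc_server" nbd_start_disk lvs/lvol "$nbd"

        mkfs.ext4 "$nbd"
        mkfs_ret=$?             # 0 in the run above

        nbd_stop_disks "$rpc_server" "$nbd"
        [ "$mkfs_ret" -ne 0 ] && return 1
        return 0
    }
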
00:11:43.902   23:45:14	-- bdev/blockdev.sh@324 -- # killprocess 108776
00:11:43.902   23:45:14	-- common/autotest_common.sh@936 -- # '[' -z 108776 ']'
00:11:43.902   23:45:14	-- common/autotest_common.sh@940 -- # kill -0 108776
00:11:43.902    23:45:14	-- common/autotest_common.sh@941 -- # uname
00:11:43.902   23:45:14	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:43.902    23:45:14	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108776
00:11:43.902   23:45:14	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:43.902   23:45:14	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:43.902   23:45:14	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 108776'
00:11:43.902  killing process with pid 108776
00:11:43.902   23:45:14	-- common/autotest_common.sh@955 -- # kill 108776
00:11:43.902   23:45:14	-- common/autotest_common.sh@960 -- # wait 108776
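
Teardown then goes through the shared killprocess helper (autotest_common.sh@936-960): sanity-check the pid, confirm the process is alive, look up its comm name to special-case sudo-wrapped children, then kill and reap. Reconstructed from the trace (the sudo branch is not exercised here, since the target is the SPDK reactor itself):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1

        kill -0 "$pid"                                      # must still exist
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid") # "reactor_0" here
        fi
        if [ "$process_name" = sudo ]; then
            :   # kill the wrapped child instead (branch not taken above)
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
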
00:11:45.804  ************************************
00:11:45.804  END TEST bdev_nbd
00:11:45.804  ************************************
00:11:45.805   23:45:16	-- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:11:45.805  
00:11:45.805  real	0m24.024s
00:11:45.805  user	0m32.995s
00:11:45.805  sys	0m8.071s
00:11:45.805   23:45:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:45.805   23:45:16	-- common/autotest_common.sh@10 -- # set +x
00:11:45.805   23:45:16	-- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:11:45.805   23:45:16	-- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite ''
00:11:45.805   23:45:16	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:11:45.805   23:45:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:45.805   23:45:16	-- common/autotest_common.sh@10 -- # set +x
00:11:45.805  ************************************
00:11:45.805  START TEST bdev_fio
00:11:45.805  ************************************
00:11:45.805   23:45:16	-- common/autotest_common.sh@1114 -- # fio_test_suite ''
00:11:45.805   23:45:16	-- bdev/blockdev.sh@329 -- # local env_context
00:11:45.805   23:45:16	-- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:11:45.805  /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:11:45.805   23:45:16	-- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:11:45.805    23:45:16	-- bdev/blockdev.sh@337 -- # echo ''
00:11:45.805    23:45:16	-- bdev/blockdev.sh@337 -- # sed s/--env-context=//
00:11:45.805   23:45:16	-- bdev/blockdev.sh@337 -- # env_context=
00:11:45.805   23:45:16	-- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:11:45.805   23:45:16	-- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:11:45.805   23:45:16	-- common/autotest_common.sh@1270 -- # local workload=verify
00:11:45.805   23:45:16	-- common/autotest_common.sh@1271 -- # local bdev_type=AIO
00:11:45.805   23:45:16	-- common/autotest_common.sh@1272 -- # local env_context=
00:11:45.805   23:45:16	-- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio
00:11:45.805   23:45:16	-- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:11:45.805   23:45:16	-- common/autotest_common.sh@1280 -- # '[' -z verify ']'
00:11:45.805   23:45:16	-- common/autotest_common.sh@1284 -- # '[' -n '' ']'
00:11:45.805   23:45:16	-- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:11:45.805   23:45:16	-- common/autotest_common.sh@1290 -- # cat
00:11:45.805   23:45:16	-- common/autotest_common.sh@1302 -- # '[' verify == verify ']'
00:11:45.805   23:45:16	-- common/autotest_common.sh@1303 -- # cat
00:11:45.805   23:45:16	-- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']'
00:11:45.805    23:45:16	-- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version
00:11:45.805   23:45:16	-- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:11:45.805   23:45:16	-- common/autotest_common.sh@1314 -- # echo serialize_overlap=1
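
fio_config_gen (autotest_common.sh@1269 onward) creates bdev.fio for the requested workload. The two traced 'cat' calls append here-documents whose bodies never appear in the xtrace, so the sketch below only marks where they land; what is visible is the AIO special case, which appends serialize_overlap=1 whenever the bundled fio reports a 3.x version. The existing-file guard is an assumption (the -e test at @1275 is false in this run):

    fio_config_gen() {
        local config_file=$1 workload=$2 bdev_type=$3 env_context=$4
        local fio_dir=/usr/src/fio

        [ -e "$config_file" ] && return 1   # assumed guard; not taken above
        touch "$config_file"

        # @1290: global fio options appended here (here-doc elided in the trace)
        if [ "$workload" == verify ]; then
            # @1303: verify-job defaults appended here (also elided)
            if [ "$bdev_type" == AIO ] && \
               [[ $("$fio_dir"/fio --version) == *"fio-3"* ]]; then
                # fio 3.x must be told to serialize overlapping verify I/O
                echo serialize_overlap=1 >> "$config_file"
            fi
        fi
    }
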
00:11:45.805   23:45:16	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:45.805   23:45:16	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@341 -- # echo filename=Malloc0
00:11:45.805   23:45:16	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:45.805   23:45:16	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0
00:11:45.805   23:45:16	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:45.805   23:45:16	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1
00:11:45.805   23:45:16	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:45.805   23:45:16	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0
00:11:45.805   23:45:16	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:45.805   23:45:16	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1
00:11:45.805   23:45:16	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:45.805   23:45:16	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2
00:11:45.805   23:45:16	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:45.805   23:45:16	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3
00:11:45.805   23:45:16	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:45.805   23:45:16	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4
00:11:45.805   23:45:16	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:45.805   23:45:16	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5
00:11:45.805   23:45:16	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:45.805   23:45:16	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6
00:11:45.805   23:45:16	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:45.805   23:45:16	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7
00:11:45.805   23:45:16	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:45.805   23:45:16	-- bdev/blockdev.sh@340 -- # echo '[job_TestPT]'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@341 -- # echo filename=TestPT
00:11:45.805   23:45:16	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:45.805   23:45:16	-- bdev/blockdev.sh@340 -- # echo '[job_raid0]'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@341 -- # echo filename=raid0
00:11:45.805   23:45:16	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:45.805   23:45:16	-- bdev/blockdev.sh@340 -- # echo '[job_concat0]'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@341 -- # echo filename=concat0
00:11:45.805   23:45:16	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:45.805   23:45:16	-- bdev/blockdev.sh@340 -- # echo '[job_raid1]'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@341 -- # echo filename=raid1
00:11:45.805   23:45:16	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:45.805   23:45:16	-- bdev/blockdev.sh@340 -- # echo '[job_AIO0]'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@341 -- # echo filename=AIO0
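
One job section per bdev follows, sixteen in all, generated by the @339-341 loop; each job is bound to a single bdev by name. The output is appended to bdev.fio, though the redirect itself is not visible in the xtrace:

    for b in "${bdevs_name[@]}"; do
        echo "[job_$b]"       # [job_Malloc0] ... [job_AIO0]
        echo "filename=$b"
    done >> "$config_file"    # destination assumed, as noted above
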
00:11:45.805   23:45:16	-- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 			--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:11:45.805   23:45:16	-- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:45.805   23:45:16	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:11:45.805   23:45:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:45.805   23:45:16	-- common/autotest_common.sh@10 -- # set +x
00:11:45.805  ************************************
00:11:45.805  START TEST bdev_fio_rw_verify
00:11:45.805  ************************************
00:11:45.805   23:45:16	-- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:45.805   23:45:16	-- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:45.805   23:45:16	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:11:45.805   23:45:16	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:11:45.805   23:45:16	-- common/autotest_common.sh@1328 -- # local sanitizers
00:11:45.805   23:45:16	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:11:45.805   23:45:16	-- common/autotest_common.sh@1330 -- # shift
00:11:45.805   23:45:16	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:11:45.805   23:45:16	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:11:45.805    23:45:16	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:11:45.805    23:45:16	-- common/autotest_common.sh@1334 -- # grep libasan
00:11:45.805    23:45:16	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:11:45.805   23:45:16	-- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6
00:11:45.805   23:45:16	-- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]]
00:11:45.805   23:45:16	-- common/autotest_common.sh@1336 -- # break
00:11:45.805   23:45:16	-- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:11:45.805   23:45:16	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
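
fio_plugin is the LD_PRELOAD dance visible at @1326-1341: ldd the SPDK fio plugin, pick out the first sanitizer runtime it links against (libasan.so.6 on this Ubuntu 22 ASan build), and preload that runtime ahead of the plugin so the sanitizer is initialized before fio loads the plugin. A reconstruction:

    fio_plugin() {
        local plugin=$1; shift
        local fio_dir=/usr/src/fio
        local sanitizers=('libasan' 'libclang_rt.asan')
        local sanitizer asan_lib=

        for sanitizer in "${sanitizers[@]}"; do
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n "$asan_lib" ]] && break   # /lib/x86_64-linux-gnu/libasan.so.6 here
        done

        # Sanitizer runtime first, plugin second: ASan must be the first
        # DSO initialized in the fio process.
        LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@"
    }
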
00:11:46.064  job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:46.064  job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:46.064  job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:46.064  job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:46.064  job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:46.064  job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:46.064  job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:46.064  job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:46.064  job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:46.064  job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:46.064  job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:46.064  job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:46.064  job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:46.064  job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:46.065  job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:46.065  job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:46.065  fio-3.35
00:11:46.065  Starting 16 threads
00:11:58.268  
00:11:58.268  job_Malloc0: (groupid=0, jobs=16): err= 0: pid=109931: Fri Dec 13 23:45:28 2024
00:11:58.268    read: IOPS=82.6k, BW=322MiB/s (338MB/s)(3225MiB/10001msec)
00:11:58.268      slat (usec): min=2, max=37836, avg=32.76, stdev=421.42
00:11:58.268      clat (usec): min=8, max=44240, avg=268.10, stdev=1224.42
00:11:58.268       lat (usec): min=22, max=44263, avg=300.86, stdev=1294.82
00:11:58.268      clat percentiles (usec):
00:11:58.268       | 50.000th=[  161], 99.000th=[  619], 99.900th=[16319], 99.990th=[24249],
00:11:58.268       | 99.999th=[44303]
00:11:58.268    write: IOPS=130k, BW=508MiB/s (532MB/s)(5017MiB/9883msec); 0 zone resets
00:11:58.268      slat (usec): min=5, max=62229, avg=62.44, stdev=643.25
00:11:58.268      clat (usec): min=8, max=62518, avg=357.25, stdev=1485.51
00:11:58.268       lat (usec): min=26, max=62547, avg=419.69, stdev=1619.22
00:11:58.268      clat percentiles (usec):
00:11:58.268       | 50.000th=[  204], 99.000th=[ 3785], 99.900th=[18220], 99.990th=[32113],
00:11:58.268       | 99.999th=[46924]
00:11:58.268     bw (  KiB/s): min=315528, max=835672, per=99.27%, avg=515998.79, stdev=8818.66, samples=304
00:11:58.268     iops        : min=78882, max=208918, avg=128999.63, stdev=2204.67, samples=304
00:11:58.268    lat (usec)   : 10=0.01%, 20=0.01%, 50=0.96%, 100=14.35%, 250=59.27%
00:11:58.268    lat (usec)   : 500=22.45%, 750=1.75%, 1000=0.15%
00:11:58.268    lat (msec)   : 2=0.11%, 4=0.08%, 10=0.19%, 20=0.60%, 50=0.08%
00:11:58.268    lat (msec)   : 100=0.01%
00:11:58.268    cpu          : usr=56.52%, sys=1.92%, ctx=226349, majf=2, minf=89033
00:11:58.268    IO depths    : 1=11.3%, 2=23.5%, 4=51.9%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:58.268       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:58.268       complete  : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:58.268       issued rwts: total=825676,1284253,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:58.268       latency   : target=0, window=0, percentile=100.00%, depth=8
00:11:58.268  
00:11:58.268  Run status group 0 (all jobs):
00:11:58.268     READ: bw=322MiB/s (338MB/s), 322MiB/s-322MiB/s (338MB/s-338MB/s), io=3225MiB (3382MB), run=10001-10001msec
00:11:58.268    WRITE: bw=508MiB/s (532MB/s), 508MiB/s-508MiB/s (532MB/s-532MB/s), io=5017MiB (5260MB), run=9883-9883msec
00:11:59.645  -----------------------------------------------------
00:11:59.645  Suppressions used:
00:11:59.645    count      bytes template
00:11:59.645       16        140 /usr/src/fio/parse.c
00:11:59.645    13495    1295520 /usr/src/fio/iolog.c
00:11:59.645        1        904 libcrypto.so
00:11:59.645  -----------------------------------------------------
00:11:59.645  
00:11:59.905  ************************************
00:11:59.905  END TEST bdev_fio_rw_verify
00:11:59.905  ************************************
00:11:59.905  
00:11:59.905  real	0m13.929s
00:11:59.905  user	1m35.816s
00:11:59.905  sys	0m4.101s
00:11:59.905   23:45:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:59.905   23:45:30	-- common/autotest_common.sh@10 -- # set +x
00:11:59.905   23:45:30	-- bdev/blockdev.sh@348 -- # rm -f
00:11:59.905   23:45:30	-- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:11:59.905   23:45:30	-- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:11:59.905   23:45:30	-- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:11:59.905   23:45:30	-- common/autotest_common.sh@1270 -- # local workload=trim
00:11:59.905   23:45:30	-- common/autotest_common.sh@1271 -- # local bdev_type=
00:11:59.905   23:45:30	-- common/autotest_common.sh@1272 -- # local env_context=
00:11:59.905   23:45:30	-- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio
00:11:59.905   23:45:30	-- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:11:59.905   23:45:30	-- common/autotest_common.sh@1280 -- # '[' -z trim ']'
00:11:59.905   23:45:30	-- common/autotest_common.sh@1284 -- # '[' -n '' ']'
00:11:59.905   23:45:30	-- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:11:59.905   23:45:30	-- common/autotest_common.sh@1290 -- # cat
00:11:59.905   23:45:30	-- common/autotest_common.sh@1302 -- # '[' trim == verify ']'
00:11:59.905   23:45:30	-- common/autotest_common.sh@1317 -- # '[' trim == trim ']'
00:11:59.905   23:45:30	-- common/autotest_common.sh@1318 -- # echo rw=trimwrite
00:11:59.905    23:45:30	-- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:11:59.906    23:45:30	-- bdev/blockdev.sh@353 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "2103d49a-c133-48df-9927-a26439e83cf7"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "2103d49a-c133-48df-9927-a26439e83cf7",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "43bdcf1a-4b28-527e-96aa-622f4390a1b6"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "43bdcf1a-4b28-527e-96aa-622f4390a1b6",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "7849d386-9c33-5b81-a599-2364dc3380e5"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "7849d386-9c33-5b81-a599-2364dc3380e5",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "8553e949-4664-5ea1-8ecb-e1159d0504e3"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "8553e949-4664-5ea1-8ecb-e1159d0504e3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    
"b99887f7-b516-5155-9793-7447850cdb38"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "b99887f7-b516-5155-9793-7447850cdb38",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "57594838-a9cc-57e1-a7e1-75bd643949ad"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "57594838-a9cc-57e1-a7e1-75bd643949ad",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "7cd84e69-a19b-5acb-a3b8-750ffc377d72"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "7cd84e69-a19b-5acb-a3b8-750ffc377d72",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "cb3ef9e4-6030-5b89-98da-b5df34f46bcb"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "cb3ef9e4-6030-5b89-98da-b5df34f46bcb",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "39ea08bb-d8d3-51b7-91dd-72dcd0e30b88"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "39ea08bb-d8d3-51b7-91dd-72dcd0e30b88",' '  
"assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": "Malloc2p6",' '  "aliases": [' '    "8e01874d-39b3-5bcc-8ac8-f47480eb02a3"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "8e01874d-39b3-5bcc-8ac8-f47480eb02a3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "96de4e4f-e3d2-59c5-b7a8-cd0b9f443ec5"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "96de4e4f-e3d2-59c5-b7a8-cd0b9f443ec5",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "70af9b17-3329-5a87-b871-ef5374ad727e"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "70af9b17-3329-5a87-b871-ef5374ad727e",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "078b9ae9-25bc-4c50-a6e9-aef372f163a4"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "078b9ae9-25bc-4c50-a6e9-aef372f163a4",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    
"rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "078b9ae9-25bc-4c50-a6e9-aef372f163a4",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "056cf3ec-d5f8-4d5e-8868-f89dd67f975b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "ef725040-b135-450c-8024-8ef9f3b90402",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "32837d76-b713-4479-b4a9-d266f7bf9ac3"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "32837d76-b713-4479-b4a9-d266f7bf9ac3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "32837d76-b713-4479-b4a9-d266f7bf9ac3",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "62dd1351-0b53-48f6-9f45-dc1ba97385c0",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "2ff40c4f-6ba5-441c-815f-a0963093863d",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    "5445c4ec-6114-4ead-88ba-65efabc64926"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "5445c4ec-6114-4ead-88ba-65efabc64926",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    
"w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "write_zeroes": true,' '    "flush": false,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "5445c4ec-6114-4ead-88ba-65efabc64926",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "ec23fdf9-72e1-470f-bff4-b636d83bbf36",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "7a789f28-1cef-4d89-9beb-03af6c45a26c",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "caac536e-e6a9-4d5e-b3f9-5637de4956ec"' '  ],' '  "product_name": "AIO disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "caac536e-e6a9-4d5e-b3f9-5637de4956ec",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false' '    }' '  }' '}'
00:11:59.906   23:45:30	-- bdev/blockdev.sh@353 -- # [[ -n Malloc0
00:11:59.906  Malloc1p0
00:11:59.906  Malloc1p1
00:11:59.907  Malloc2p0
00:11:59.907  Malloc2p1
00:11:59.907  Malloc2p2
00:11:59.907  Malloc2p3
00:11:59.907  Malloc2p4
00:11:59.907  Malloc2p5
00:11:59.907  Malloc2p6
00:11:59.907  Malloc2p7
00:11:59.907  TestPT
00:11:59.907  raid0
00:11:59.907  concat0 ]]
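
blockdev.sh@353 prunes the bdev list for the trim pass with a jq filter over the JSON dump above: only bdevs whose supported_io_types.unmap is true survive, which is why raid1 and AIO0 (both report "unmap": false) are missing from the matched list ending at concat0. The filter itself, applied to the collected per-bdev JSON (the holding variable's name is assumed):

    printf '%s\n' "$bdevs_json" \
        | jq -r 'select(.supported_io_types.unmap == true) | .name'
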
00:11:59.907    23:45:30	-- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:11:59.908    23:45:30	-- bdev/blockdev.sh@354 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "2103d49a-c133-48df-9927-a26439e83cf7"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "2103d49a-c133-48df-9927-a26439e83cf7",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "43bdcf1a-4b28-527e-96aa-622f4390a1b6"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "43bdcf1a-4b28-527e-96aa-622f4390a1b6",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "7849d386-9c33-5b81-a599-2364dc3380e5"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "7849d386-9c33-5b81-a599-2364dc3380e5",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "8553e949-4664-5ea1-8ecb-e1159d0504e3"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "8553e949-4664-5ea1-8ecb-e1159d0504e3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    
"b99887f7-b516-5155-9793-7447850cdb38"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "b99887f7-b516-5155-9793-7447850cdb38",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "57594838-a9cc-57e1-a7e1-75bd643949ad"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "57594838-a9cc-57e1-a7e1-75bd643949ad",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "7cd84e69-a19b-5acb-a3b8-750ffc377d72"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "7cd84e69-a19b-5acb-a3b8-750ffc377d72",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "cb3ef9e4-6030-5b89-98da-b5df34f46bcb"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "cb3ef9e4-6030-5b89-98da-b5df34f46bcb",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "39ea08bb-d8d3-51b7-91dd-72dcd0e30b88"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "39ea08bb-d8d3-51b7-91dd-72dcd0e30b88",' '  
"assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": "Malloc2p6",' '  "aliases": [' '    "8e01874d-39b3-5bcc-8ac8-f47480eb02a3"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "8e01874d-39b3-5bcc-8ac8-f47480eb02a3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "96de4e4f-e3d2-59c5-b7a8-cd0b9f443ec5"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "96de4e4f-e3d2-59c5-b7a8-cd0b9f443ec5",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "70af9b17-3329-5a87-b871-ef5374ad727e"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "70af9b17-3329-5a87-b871-ef5374ad727e",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "078b9ae9-25bc-4c50-a6e9-aef372f163a4"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "078b9ae9-25bc-4c50-a6e9-aef372f163a4",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    
"rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "078b9ae9-25bc-4c50-a6e9-aef372f163a4",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "056cf3ec-d5f8-4d5e-8868-f89dd67f975b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "ef725040-b135-450c-8024-8ef9f3b90402",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "32837d76-b713-4479-b4a9-d266f7bf9ac3"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "32837d76-b713-4479-b4a9-d266f7bf9ac3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "32837d76-b713-4479-b4a9-d266f7bf9ac3",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "62dd1351-0b53-48f6-9f45-dc1ba97385c0",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "2ff40c4f-6ba5-441c-815f-a0963093863d",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    "5445c4ec-6114-4ead-88ba-65efabc64926"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "5445c4ec-6114-4ead-88ba-65efabc64926",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    
"w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "write_zeroes": true,' '    "flush": false,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "5445c4ec-6114-4ead-88ba-65efabc64926",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "ec23fdf9-72e1-470f-bff4-b636d83bbf36",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "7a789f28-1cef-4d89-9beb-03af6c45a26c",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "caac536e-e6a9-4d5e-b3f9-5637de4956ec"' '  ],' '  "product_name": "AIO disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "caac536e-e6a9-4d5e-b3f9-5637de4956ec",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false' '    }' '  }' '}'
00:11:59.908   23:45:30	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:59.908   23:45:30	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]'
00:11:59.908   23:45:30	-- bdev/blockdev.sh@356 -- # echo filename=Malloc0
00:11:59.908   23:45:30	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:59.908   23:45:30	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]'
00:11:59.908   23:45:30	-- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0
00:11:59.908   23:45:30	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:59.908   23:45:30	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]'
00:11:59.908   23:45:30	-- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1
00:11:59.908   23:45:30	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:59.908   23:45:30	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]'
00:11:59.908   23:45:30	-- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0
00:11:59.908   23:45:30	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:59.908   23:45:30	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]'
00:11:59.908   23:45:30	-- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1
00:11:59.908   23:45:30	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:59.908   23:45:30	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]'
00:11:59.908   23:45:30	-- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2
00:11:59.908   23:45:30	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:59.908   23:45:30	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]'
00:11:59.908   23:45:30	-- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3
00:11:59.908   23:45:30	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:59.908   23:45:30	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]'
00:11:59.908   23:45:30	-- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4
00:11:59.908   23:45:30	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:59.908   23:45:30	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]'
00:11:59.908   23:45:30	-- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5
00:11:59.908   23:45:30	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:59.908   23:45:30	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]'
00:11:59.908   23:45:30	-- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6
00:11:59.908   23:45:30	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:59.908   23:45:30	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]'
00:11:59.908   23:45:30	-- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7
00:11:59.908   23:45:30	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:59.908   23:45:30	-- bdev/blockdev.sh@355 -- # echo '[job_TestPT]'
00:11:59.908   23:45:30	-- bdev/blockdev.sh@356 -- # echo filename=TestPT
00:11:59.908   23:45:30	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:59.908   23:45:30	-- bdev/blockdev.sh@355 -- # echo '[job_raid0]'
00:11:59.908   23:45:30	-- bdev/blockdev.sh@356 -- # echo filename=raid0
00:11:59.908   23:45:30	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:59.908   23:45:30	-- bdev/blockdev.sh@355 -- # echo '[job_concat0]'
00:11:59.908   23:45:30	-- bdev/blockdev.sh@356 -- # echo filename=concat0
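
The loop traced above builds the fio job file for the trim test: it filters the captured bdev list down to bdevs that advertise unmap support and appends a [job_<name>] stanza per match, which is why raid1 and AIO0 ("unmap": false in the dump above) get no trim job while the fourteen bdevs from Malloc0 through concat0 do. A condensed sketch of that generator, with the output path taken from the fio command below:

  out=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
  for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
      echo "[job_$b]" >> "$out"      # one fio job section per trim-capable bdev
      echo "filename=$b" >> "$out"   # the spdk_bdev ioengine resolves this to a bdev name
  done
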
00:11:59.908   23:45:30	-- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:59.908   23:45:30	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:11:59.908   23:45:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:59.908   23:45:30	-- common/autotest_common.sh@10 -- # set +x
00:11:59.908  ************************************
00:11:59.908  START TEST bdev_fio_trim
00:11:59.908  ************************************
00:11:59.908   23:45:30	-- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:59.908   23:45:30	-- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:59.908   23:45:30	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:11:59.908   23:45:30	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:11:59.908   23:45:30	-- common/autotest_common.sh@1328 -- # local sanitizers
00:11:59.908   23:45:30	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:11:59.908   23:45:30	-- common/autotest_common.sh@1330 -- # shift
00:11:59.908   23:45:30	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:11:59.908   23:45:30	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:11:59.908    23:45:30	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:11:59.908    23:45:30	-- common/autotest_common.sh@1334 -- # grep libasan
00:11:59.908    23:45:30	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:11:59.908   23:45:30	-- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6
00:11:59.908   23:45:30	-- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]]
00:11:59.908   23:45:30	-- common/autotest_common.sh@1336 -- # break
00:11:59.908   23:45:30	-- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:11:59.908   23:45:30	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
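
The autotest_common.sh trace above (fio_bdev -> fio_plugin) shows how the harness runs fio with the SPDK bdev ioengine under AddressSanitizer: it walks the candidate sanitizer runtimes, asks ldd which one the plugin links against, and preloads that library ahead of the plugin so the sanitizer is the first DSO in the process. A condensed sketch of the same logic:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  for sanitizer in libasan libclang_rt.asan; do
      # third ldd column is the resolved path, e.g. /lib/x86_64-linux-gnu/libasan.so.6
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n $asan_lib ]] && break
  done
  # flags abbreviated; the traced run also passes --spdk_json_conf, --aux-path
  # and --verify_state_save=0
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 \
      --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
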
00:12:00.167  job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.167  job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.167  job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.167  job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.167  job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.167  job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.167  job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.167  job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.167  job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.167  job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.167  job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.167  job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.167  job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.167  job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.167  fio-3.35
00:12:00.167  Starting 14 threads
00:12:12.367  
00:12:12.367  job_Malloc0: (groupid=0, jobs=14): err= 0: pid=110152: Fri Dec 13 23:45:42 2024
00:12:12.367    write: IOPS=140k, BW=546MiB/s (572MB/s)(5456MiB/10001msec); 0 zone resets
00:12:12.367      slat (usec): min=2, max=28032, avg=36.52, stdev=403.82
00:12:12.367      clat (usec): min=25, max=28334, avg=250.83, stdev=1069.16
00:12:12.367       lat (usec): min=36, max=28352, avg=287.35, stdev=1142.37
00:12:12.367      clat percentiles (usec):
00:12:12.367       | 50.000th=[  167], 99.000th=[  449], 99.900th=[16319], 99.990th=[20317],
00:12:12.367       | 99.999th=[24249]
00:12:12.367     bw (  KiB/s): min=361090, max=870464, per=99.64%, avg=556690.32, stdev=11625.17, samples=266
00:12:12.367     iops        : min=90272, max=217616, avg=139172.47, stdev=2906.29, samples=266
00:12:12.367    trim: IOPS=140k, BW=546MiB/s (572MB/s)(5456MiB/10001msec); 0 zone resets
00:12:12.367      slat (usec): min=4, max=28091, avg=24.86, stdev=330.37
00:12:12.367      clat (usec): min=4, max=28265, avg=273.17, stdev=1103.69
00:12:12.367       lat (usec): min=10, max=28333, avg=298.03, stdev=1151.79
00:12:12.367      clat percentiles (usec):
00:12:12.367       | 50.000th=[  188], 99.000th=[  416], 99.900th=[16319], 99.990th=[20317],
00:12:12.367       | 99.999th=[24249]
00:12:12.367     bw (  KiB/s): min=361098, max=870408, per=99.64%, avg=556693.68, stdev=11624.74, samples=266
00:12:12.367     iops        : min=90274, max=217602, avg=139173.32, stdev=2906.18, samples=266
00:12:12.367    lat (usec)   : 10=0.07%, 20=0.19%, 50=0.96%, 100=10.01%, 250=69.61%
00:12:12.367    lat (usec)   : 500=18.52%, 750=0.10%, 1000=0.01%
00:12:12.367    lat (msec)   : 2=0.01%, 4=0.02%, 10=0.03%, 20=0.46%, 50=0.02%
00:12:12.367    cpu          : usr=69.12%, sys=0.46%, ctx=170449, majf=0, minf=786
00:12:12.367    IO depths    : 1=12.3%, 2=24.7%, 4=50.1%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:12.367       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:12.367       complete  : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:12.367       issued rwts: total=0,1396835,1396841,0 short=0,0,0,0 dropped=0,0,0,0
00:12:12.367       latency   : target=0, window=0, percentile=100.00%, depth=8
00:12:12.367  
00:12:12.367  Run status group 0 (all jobs):
00:12:12.367    WRITE: bw=546MiB/s (572MB/s), 546MiB/s-546MiB/s (572MB/s-572MB/s), io=5456MiB (5721MB), run=10001-10001msec
00:12:12.367     TRIM: bw=546MiB/s (572MB/s), 546MiB/s-546MiB/s (572MB/s-572MB/s), io=5456MiB (5721MB), run=10001-10001msec
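
(Sanity check on the summary: 5456 MiB over 10.001 s is ~545.5 MiB/s, and 546 MiB/s divided by the 4 KiB block size is ~139.7k IOPS, consistent with the ~140k write and trim IOPS reported for this trimwrite workload.)
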
00:12:13.742  -----------------------------------------------------
00:12:13.742  Suppressions used:
00:12:13.742    count      bytes template
00:12:13.742       14        129 /usr/src/fio/parse.c
00:12:13.742        1        904 libcrypto.so
00:12:13.742  -----------------------------------------------------
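
The block above is the sanitizer's leak-suppression summary: 14 suppressed allocations matched fio's option parser (parse.c) and one matched libcrypto.so. Suppressions of this kind are typically fed to LeakSanitizer through a text file of leak:<pattern> lines; a sketch, with an illustrative file name:

  cat > lsan-suppressions.txt <<'EOF'
  leak:parse.c
  leak:libcrypto.so
  EOF
  LSAN_OPTIONS=suppressions=lsan-suppressions.txt ./your_test_command
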
00:12:13.742  
00:12:13.742  ************************************
00:12:13.742  END TEST bdev_fio_trim
00:12:13.742  ************************************
00:12:13.742  
00:12:13.742  real	0m13.498s
00:12:13.742  user	1m41.564s
00:12:13.742  sys	0m1.493s
00:12:13.742   23:45:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:13.742   23:45:44	-- common/autotest_common.sh@10 -- # set +x
00:12:13.742   23:45:44	-- bdev/blockdev.sh@366 -- # rm -f
00:12:13.742   23:45:44	-- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:13.742  /home/vagrant/spdk_repo/spdk
00:12:13.742   23:45:44	-- bdev/blockdev.sh@368 -- # popd
00:12:13.742   23:45:44	-- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT
00:12:13.742  
00:12:13.742  real	0m27.754s
00:12:13.742  user	3m17.584s
00:12:13.742  sys	0m5.695s
00:12:13.742  ************************************
00:12:13.742  END TEST bdev_fio
00:12:13.742  ************************************
00:12:13.742   23:45:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:13.742   23:45:44	-- common/autotest_common.sh@10 -- # set +x
00:12:13.742   23:45:44	-- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:12:13.742   23:45:44	-- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:12:13.742   23:45:44	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:12:13.742   23:45:44	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:13.742   23:45:44	-- common/autotest_common.sh@10 -- # set +x
00:12:13.742  ************************************
00:12:13.742  START TEST bdev_verify
00:12:13.742  ************************************
00:12:13.742   23:45:44	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:12:13.742  [2024-12-13 23:45:44.288873] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:13.742  [2024-12-13 23:45:44.289286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110339 ]
00:12:13.742  [2024-12-13 23:45:44.466839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:14.000  [2024-12-13 23:45:44.706251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:14.000  [2024-12-13 23:45:44.706258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:14.567  [2024-12-13 23:45:45.073321] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:14.567  [2024-12-13 23:45:45.073768] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:14.567  [2024-12-13 23:45:45.081287] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:14.567  [2024-12-13 23:45:45.081513] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:14.567  [2024-12-13 23:45:45.089328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:14.567  [2024-12-13 23:45:45.089528] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:12:14.567  [2024-12-13 23:45:45.089706] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:12:14.567  [2024-12-13 23:45:45.272165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:14.567  [2024-12-13 23:45:45.272608] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:14.567  [2024-12-13 23:45:45.272710] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:12:14.567  [2024-12-13 23:45:45.272979] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:14.568  [2024-12-13 23:45:45.275637] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:14.568  [2024-12-13 23:45:45.275835] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
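
With the passthru bdev re-registered, the verify pass drives bdevperf against the same JSON config. An annotated copy of the traced command line; the flag readings are a sketch based on bdevperf's usage, and the per-core pairing is visible in the table below:

  # -q 128 : 128 outstanding IOs per job        -o 4096 : 4 KiB IOs
  # -w verify : write, read back, and compare   -t 5    : run for 5 seconds
  # -C : every core in the mask submits to each bdev
  # -m 0x3 : cores 0 and 1 -- hence the paired "Core Mask 0x1"/"Core Mask 0x2" rows per bdev
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
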
00:12:15.135  Running I/O for 5 seconds...
00:12:20.401  
00:12:20.401                                                                                                  Latency(us)
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x0 length 0x1000
00:12:20.401  	 Malloc0             :       5.17    1703.89       6.66       0.00     0.00   74654.93    2025.66  219247.71
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x1000 length 0x1000
00:12:20.401  	 Malloc0             :       5.17    1678.81       6.56       0.00     0.00   75763.37    1593.72  287881.77
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x0 length 0x800
00:12:20.401  	 Malloc1p0           :       5.17    1179.92       4.61       0.00     0.00  107681.87    3932.16  130595.37
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x800 length 0x800
00:12:20.401  	 Malloc1p0           :       5.17    1180.62       4.61       0.00     0.00  107673.47    3902.37  130595.37
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x0 length 0x800
00:12:20.401  	 Malloc1p1           :       5.17    1179.40       4.61       0.00     0.00  107564.40    3678.95  126782.37
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x800 length 0x800
00:12:20.401  	 Malloc1p1           :       5.17    1180.36       4.61       0.00     0.00  107542.17    3664.06  126782.37
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x0 length 0x200
00:12:20.401  	 Malloc2p0           :       5.18    1178.85       4.60       0.00     0.00  107437.13    3842.79  122969.37
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x200 length 0x200
00:12:20.401  	 Malloc2p0           :       5.17    1179.87       4.61       0.00     0.00  107409.86    3813.00  122969.37
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x0 length 0x200
00:12:20.401  	 Malloc2p1           :       5.18    1178.32       4.60       0.00     0.00  107297.12    3991.74  118203.11
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x200 length 0x200
00:12:20.401  	 Malloc2p1           :       5.17    1179.35       4.61       0.00     0.00  107285.23    4021.53  118679.74
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x0 length 0x200
00:12:20.401  	 Malloc2p2           :       5.18    1177.77       4.60       0.00     0.00  107187.58    3619.37  115343.36
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x200 length 0x200
00:12:20.401  	 Malloc2p2           :       5.18    1178.82       4.60       0.00     0.00  107151.36    3634.27  115343.36
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x0 length 0x200
00:12:20.401  	 Malloc2p3           :       5.18    1177.21       4.60       0.00     0.00  107079.79    3470.43  112483.61
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x200 length 0x200
00:12:20.401  	 Malloc2p3           :       5.18    1178.28       4.60       0.00     0.00  107048.68    3589.59  112006.98
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x0 length 0x200
00:12:20.401  	 Malloc2p4           :       5.19    1176.64       4.60       0.00     0.00  106973.02    3604.48  109147.23
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x200 length 0x200
00:12:20.401  	 Malloc2p4           :       5.18    1177.74       4.60       0.00     0.00  106921.32    3604.48  109147.23
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x0 length 0x200
00:12:20.401  	 Malloc2p5           :       5.19    1176.10       4.59       0.00     0.00  106858.57    3515.11  106764.10
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x200 length 0x200
00:12:20.401  	 Malloc2p5           :       5.18    1177.18       4.60       0.00     0.00  106828.04    3634.27  106764.10
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x0 length 0x200
00:12:20.401  	 Malloc2p6           :       5.19    1175.54       4.59       0.00     0.00  106721.64    3589.59  103427.72
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x200 length 0x200
00:12:20.401  	 Malloc2p6           :       5.19    1176.61       4.60       0.00     0.00  106707.76    3634.27  103427.72
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x0 length 0x200
00:12:20.401  	 Malloc2p7           :       5.19    1175.00       4.59       0.00     0.00  106619.93    3678.95  100567.97
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x200 length 0x200
00:12:20.401  	 Malloc2p7           :       5.19    1176.06       4.59       0.00     0.00  106603.65    3664.06  100091.35
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x0 length 0x1000
00:12:20.401  	 TestPT              :       5.20    1162.51       4.54       0.00     0.00  107593.95    6851.49   99614.72
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:20.401  	 Verification LBA range: start 0x1000 length 0x1000
00:12:20.401  	 TestPT              :       5.19    1162.79       4.54       0.00     0.00  107622.78    7685.59  100567.97
00:12:20.401  
[2024-12-13T23:45:51.133Z]  Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:20.402  	 Verification LBA range: start 0x0 length 0x2000
00:12:20.402  	 raid0               :       5.20    1173.87       4.59       0.00     0.00  106311.92    3798.11   89605.59
00:12:20.402  
[2024-12-13T23:45:51.134Z]  Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:20.402  	 Verification LBA range: start 0x2000 length 0x2000
00:12:20.402  	 raid0               :       5.19    1174.95       4.59       0.00     0.00  106295.84    3723.64   91035.46
00:12:20.402  
[2024-12-13T23:45:51.134Z]  Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:20.402  	 Verification LBA range: start 0x0 length 0x2000
00:12:20.402  	 concat0             :       5.20    1173.36       4.58       0.00     0.00  106183.53    3738.53   86269.21
00:12:20.402  
[2024-12-13T23:45:51.134Z]  Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:20.402  	 Verification LBA range: start 0x2000 length 0x2000
00:12:20.402  	 concat0             :       5.20    1174.41       4.59       0.00     0.00  106164.62    3723.64   87699.08
00:12:20.402  
[2024-12-13T23:45:51.134Z]  Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:20.402  	 Verification LBA range: start 0x0 length 0x1000
00:12:20.402  	 raid1               :       5.20    1188.00       4.64       0.00     0.00  105244.36    1258.59   82456.20
00:12:20.402  
[2024-12-13T23:45:51.134Z]  Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:20.402  	 Verification LBA range: start 0x1000 length 0x1000
00:12:20.402  	 raid1               :       5.20    1173.84       4.59       0.00     0.00  106046.05    4081.11   83886.08
00:12:20.402  
[2024-12-13T23:45:51.134Z]  Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:20.402  	 Verification LBA range: start 0x0 length 0x4e2
00:12:20.402  	 AIO0                :       5.20    1186.83       4.64       0.00     0.00  105081.20    1280.93   81979.58
00:12:20.402  
[2024-12-13T23:45:51.134Z]  Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:20.402  	 Verification LBA range: start 0x4e2 length 0x4e2
00:12:20.402  	 AIO0                :       5.20    1188.09       4.64       0.00     0.00  104974.22     484.07   83409.45
00:12:20.402  
[2024-12-13T23:45:51.134Z]  ===================================================================================================================
00:12:20.402  
[2024-12-13T23:45:51.134Z]  Total                       :              38700.99     151.18       0.00     0.00  104047.93     484.07  287881.77
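
(Consistency check on the totals row: 38,700.99 IOPS x 4096 B per IO is ~151.18 MiB/s, matching the reported throughput.)
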
00:12:22.307  
00:12:22.307  real	0m8.612s
00:12:22.307  user	0m14.416s
00:12:22.307  sys	0m0.591s
00:12:22.307   23:45:52	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:22.307   23:45:52	-- common/autotest_common.sh@10 -- # set +x
00:12:22.307  ************************************
00:12:22.307  END TEST bdev_verify
00:12:22.307  ************************************
00:12:22.307   23:45:52	-- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:22.307   23:45:52	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:12:22.307   23:45:52	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:22.307   23:45:52	-- common/autotest_common.sh@10 -- # set +x
00:12:22.307  ************************************
00:12:22.307  START TEST bdev_verify_big_io
00:12:22.307  ************************************
00:12:22.307   23:45:52	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:22.307  [2024-12-13 23:45:52.939242] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:22.307  [2024-12-13 23:45:52.939670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110464 ]
00:12:22.566  [2024-12-13 23:45:53.113960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:22.825  [2024-12-13 23:45:53.312621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:22.825  [2024-12-13 23:45:53.312631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:23.083  [2024-12-13 23:45:53.674938] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:23.084  [2024-12-13 23:45:53.675390] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:23.084  [2024-12-13 23:45:53.682902] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:23.084  [2024-12-13 23:45:53.683161] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:23.084  [2024-12-13 23:45:53.690943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:23.084  [2024-12-13 23:45:53.691173] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:12:23.084  [2024-12-13 23:45:53.691320] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:12:23.342  [2024-12-13 23:45:53.876263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:23.342  [2024-12-13 23:45:53.876731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:23.342  [2024-12-13 23:45:53.876828] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:12:23.342  [2024-12-13 23:45:53.877088] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:23.342  [2024-12-13 23:45:53.879698] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:23.342  [2024-12-13 23:45:53.879894] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:12:23.601  [2024-12-13 23:45:54.223146] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:12:23.601  [2024-12-13 23:45:54.226619] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:12:23.601  [2024-12-13 23:45:54.230619] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:12:23.601  [2024-12-13 23:45:54.234526] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:12:23.601  [2024-12-13 23:45:54.237704] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:12:23.601  [2024-12-13 23:45:54.241635] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:12:23.601  [2024-12-13 23:45:54.244850] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:12:23.601  [2024-12-13 23:45:54.248737] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:12:23.601  [2024-12-13 23:45:54.252094] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:12:23.601  [2024-12-13 23:45:54.256089] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:12:23.601  [2024-12-13 23:45:54.259403] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:12:23.601  [2024-12-13 23:45:54.263344] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:12:23.601  [2024-12-13 23:45:54.266574] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:12:23.602  [2024-12-13 23:45:54.270457] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:12:23.602  [2024-12-13 23:45:54.274428] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:12:23.602  [2024-12-13 23:45:54.277598] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:12:23.861  [2024-12-13 23:45:54.356923] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:12:23.861  [2024-12-13 23:45:54.364028] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
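
The warnings above are bdevperf clamping the requested queue depth to what a verify job can keep in flight on small bdevs. The numbers line up with half the count of IO-sized chunks in each target: a Malloc2p* split is 8192 blocks x 512 B = 4 MiB, or 64 IOs of 65536 B, and the depth is capped at 32; AIO0 is 5000 blocks x 2048 B ~ 9.77 MiB, or 156 full-size IOs, capped at 78.
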
00:12:23.861  Running I/O for 5 seconds...
00:12:30.455  
00:12:30.455                                                                                                  Latency(us)
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x0 length 0x100
00:12:30.455  	 Malloc0             :       5.54     384.31      24.02       0.00     0.00  323861.14   18469.24 1067641.02
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x100 length 0x100
00:12:30.455  	 Malloc0             :       5.55     356.66      22.29       0.00     0.00  349589.61   13702.98 1296421.24
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x0 length 0x80
00:12:30.455  	 Malloc1p0           :       5.61     223.73      13.98       0.00     0.00  548662.41   35985.22  976128.93
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x80 length 0x80
00:12:30.455  	 Malloc1p0           :       5.55     290.80      18.18       0.00     0.00  424843.86   37891.72  876990.84
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x0 length 0x80
00:12:30.455  	 Malloc1p1           :       5.76     131.54       8.22       0.00     0.00  911416.34   40751.48 1883623.80
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x80 length 0x80
00:12:30.455  	 Malloc1p1           :       5.78     131.23       8.20       0.00     0.00  913573.41   41466.41 1952257.86
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x0 length 0x20
00:12:30.455  	 Malloc2p0           :       5.61      76.51       4.78       0.00     0.00  394939.95    6881.28  606267.58
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x20 length 0x20
00:12:30.455  	 Malloc2p0           :       5.56      73.44       4.59       0.00     0.00  408159.46    6970.65  606267.58
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x0 length 0x20
00:12:30.455  	 Malloc2p1           :       5.61      76.49       4.78       0.00     0.00  393540.21    6851.49  591015.56
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x20 length 0x20
00:12:30.455  	 Malloc2p1           :       5.56      73.42       4.59       0.00     0.00  406508.72    6881.28  591015.56
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x0 length 0x20
00:12:30.455  	 Malloc2p2           :       5.61      76.47       4.78       0.00     0.00  391949.65    6940.86  575763.55
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x20 length 0x20
00:12:30.455  	 Malloc2p2           :       5.56      73.40       4.59       0.00     0.00  404948.48    6642.97  579576.55
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x0 length 0x20
00:12:30.455  	 Malloc2p3           :       5.61      76.45       4.78       0.00     0.00  390523.25    6911.07  564324.54
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x20 length 0x20
00:12:30.455  	 Malloc2p3           :       5.61      76.42       4.78       0.00     0.00  391019.45    6791.91  568137.54
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x0 length 0x20
00:12:30.455  	 Malloc2p4           :       5.61      76.44       4.78       0.00     0.00  389268.36    6851.49  552885.53
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x20 length 0x20
00:12:30.455  	 Malloc2p4           :       5.62      76.39       4.77       0.00     0.00  389588.74    6762.12  552885.53
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x0 length 0x20
00:12:30.455  	 Malloc2p5           :       5.61      76.42       4.78       0.00     0.00  387709.67    6791.91  541446.52
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x20 length 0x20
00:12:30.455  	 Malloc2p5           :       5.62      76.36       4.77       0.00     0.00  388139.15    7179.17  537633.51
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x0 length 0x20
00:12:30.455  	 Malloc2p6           :       5.62      76.39       4.77       0.00     0.00  386327.01    6970.65  526194.50
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x20 length 0x20
00:12:30.455  	 Malloc2p6           :       5.62      76.32       4.77       0.00     0.00  386766.35    7030.23  526194.50
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x0 length 0x20
00:12:30.455  	 Malloc2p7           :       5.62      76.36       4.77       0.00     0.00  384748.19    7060.01  510942.49
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x20 length 0x20
00:12:30.455  	 Malloc2p7           :       5.62      76.31       4.77       0.00     0.00  385333.63    7238.75  510942.49
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x0 length 0x100
00:12:30.455  	 TestPT              :       5.79     131.65       8.23       0.00     0.00  876142.04   53620.36 1914127.83
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x100 length 0x100
00:12:30.455  	 TestPT              :       5.80     124.86       7.80       0.00     0.00  923627.06   74830.20 1967509.88
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x0 length 0x200
00:12:30.455  	 raid0               :       5.74     144.43       9.03       0.00     0.00  798443.65   40989.79 1868371.78
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x200 length 0x200
00:12:30.455  	 raid0               :       5.80     137.25       8.58       0.00     0.00  831984.46   40513.16 1937005.85
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x0 length 0x200
00:12:30.455  	 concat0             :       5.80     147.99       9.25       0.00     0.00  767825.97   36938.47 1868371.78
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x200 length 0x200
00:12:30.455  	 concat0             :       5.81     147.76       9.23       0.00     0.00  767964.57   35746.91 1929379.84
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x0 length 0x100
00:12:30.455  	 raid1               :       5.79     153.43       9.59       0.00     0.00  728166.75   25141.99 1883623.80
00:12:30.455  
[2024-12-13T23:46:01.187Z]  Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:30.455  	 Verification LBA range: start 0x100 length 0x100
00:12:30.456  	 raid1               :       5.80     156.86       9.80       0.00     0.00  709758.87   23116.33 1921753.83
00:12:30.456  
[2024-12-13T23:46:01.188Z]  Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:12:30.456  	 Verification LBA range: start 0x0 length 0x4e
00:12:30.456  	 AIO0                :       5.80     159.61       9.98       0.00     0.00  422459.56    2681.02 1090519.04
00:12:30.456  
[2024-12-13T23:46:01.188Z]  Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:12:30.456  	 Verification LBA range: start 0x4e length 0x4e
00:12:30.456  	 AIO0                :       5.81     162.78      10.17       0.00     0.00  413983.79    1496.90 1098145.05
00:12:30.456  
[2024-12-13T23:46:01.188Z]  ===================================================================================================================
00:12:30.456  
[2024-12-13T23:46:01.188Z]  Total                       :               4198.46     262.40       0.00     0.00  540586.25    1496.90 1967509.88
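
(Consistency check on the totals row: 4198.46 IOPS x 64 KiB per IO is ~262.40 MiB/s, matching the reported throughput.)
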
00:12:31.832  ************************************
00:12:31.832  END TEST bdev_verify_big_io
00:12:31.832  ************************************
00:12:31.832  
00:12:31.832  real	0m9.499s
00:12:31.832  user	0m17.192s
00:12:31.832  sys	0m0.606s
00:12:31.832   23:46:02	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:31.832   23:46:02	-- common/autotest_common.sh@10 -- # set +x
00:12:31.832   23:46:02	-- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:31.832   23:46:02	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:12:31.832   23:46:02	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:31.832   23:46:02	-- common/autotest_common.sh@10 -- # set +x
00:12:31.832  ************************************
00:12:31.832  START TEST bdev_write_zeroes
00:12:31.832  ************************************
00:12:31.832   23:46:02	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:31.832  [2024-12-13 23:46:02.488488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:31.832  [2024-12-13 23:46:02.489004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110608 ]
00:12:32.091  [2024-12-13 23:46:02.657900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:32.349  [2024-12-13 23:46:02.839291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:32.608  [2024-12-13 23:46:03.198774] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:32.608  [2024-12-13 23:46:03.199058] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:32.608  [2024-12-13 23:46:03.206730] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:32.608  [2024-12-13 23:46:03.206949] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:32.608  [2024-12-13 23:46:03.214749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:32.608  [2024-12-13 23:46:03.214935] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:12:32.608  [2024-12-13 23:46:03.215098] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:12:32.867  [2024-12-13 23:46:03.405787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:32.867  [2024-12-13 23:46:03.406127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:32.867  [2024-12-13 23:46:03.406228] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000009980
00:12:32.867  [2024-12-13 23:46:03.406409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:32.867  [2024-12-13 23:46:03.408848] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:32.867  [2024-12-13 23:46:03.409067] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
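The vbdev_passthru notices above trace the deferred-creation flow: the passthru vbdev is requested while its base bdev is still absent, registration is parked ("vbdev creation deferred pending base bdev arrival"), and once Malloc3 appears the base is opened, claimed, and TestPT is registered on top of it. Assuming a running SPDK target, roughly the same flow can be driven with rpc.py; treat the exact flag spelling, and whether creation defers or errors when the base is missing, as assumptions here, since this run configures it via the --json file instead:

  # Request a passthru bdev on top of Malloc3 (may defer until the base exists):
  ./scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT
  # Creating the base afterwards lets the deferred registration complete:
  ./scripts/rpc.py bdev_malloc_create -b Malloc3 128 512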
00:12:33.126  Running I/O for 1 seconds...
00:12:34.502  
00:12:34.502                                                                                                  Latency(us)
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:34.502  	 Malloc0             :       1.03    6239.66      24.37       0.00     0.00   20504.33     670.25   36223.53
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:34.502  	 Malloc1p0           :       1.03    6232.81      24.35       0.00     0.00   20494.44     848.99   35508.60
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:34.502  	 Malloc1p1           :       1.03    6226.23      24.32       0.00     0.00   20476.32     848.99   34555.35
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:34.502  	 Malloc2p0           :       1.03    6219.93      24.30       0.00     0.00   20455.77     848.99   33602.09
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:34.502  	 Malloc2p1           :       1.03    6213.52      24.27       0.00     0.00   20441.83     841.54   32887.16
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:34.502  	 Malloc2p2           :       1.03    6207.11      24.25       0.00     0.00   20423.05     852.71   31933.91
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:34.502  	 Malloc2p3           :       1.05    6237.97      24.37       0.00     0.00   20288.80     845.27   31218.97
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:34.502  	 Malloc2p4           :       1.05    6231.02      24.34       0.00     0.00   20275.79     863.88   30265.72
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:34.502  	 Malloc2p5           :       1.05    6223.54      24.31       0.00     0.00   20257.70     848.99   29431.62
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:34.502  	 Malloc2p6           :       1.05    6216.13      24.28       0.00     0.00   20249.26     863.88   28478.37
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:34.502  	 Malloc2p7           :       1.05    6209.27      24.25       0.00     0.00   20236.11     860.16   27644.28
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:34.502  	 TestPT              :       1.05    6202.52      24.23       0.00     0.00   20220.15     901.12   26691.03
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:34.502  	 raid0               :       1.05    6193.69      24.19       0.00     0.00   20202.66    1414.98   25380.31
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:34.502  	 concat0             :       1.06    6186.05      24.16       0.00     0.00   20163.00    1392.64   23950.43
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:34.502  	 raid1               :       1.06    6176.74      24.13       0.00     0.00   20120.98    2189.50   22878.02
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:34.502  	 AIO0                :       1.06    6150.08      24.02       0.00     0.00   20125.24    1414.98   22878.02
00:12:34.502  
[2024-12-13T23:46:05.234Z]  ===================================================================================================================
00:12:34.502  
[2024-12-13T23:46:05.234Z]  Total                       :              99366.29     388.15       0.00     0.00   20307.33     670.25   36223.53
00:12:36.404  
00:12:36.404  real	0m4.261s
00:12:36.404  user	0m3.586s
00:12:36.404  sys	0m0.477s
00:12:36.404  ************************************
00:12:36.404  END TEST bdev_write_zeroes
00:12:36.404  ************************************
00:12:36.404   23:46:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:36.404   23:46:06	-- common/autotest_common.sh@10 -- # set +x
00:12:36.404   23:46:06	-- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:36.404   23:46:06	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:12:36.404   23:46:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:36.404   23:46:06	-- common/autotest_common.sh@10 -- # set +x
00:12:36.404  ************************************
00:12:36.404  START TEST bdev_json_nonenclosed
00:12:36.404  ************************************
00:12:36.404   23:46:06	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:36.404  [2024-12-13 23:46:06.808975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:36.404  [2024-12-13 23:46:06.809896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110685 ]
00:12:36.404  [2024-12-13 23:46:06.985060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:36.663  [2024-12-13 23:46:07.180096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:36.663  [2024-12-13 23:46:07.180364] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:12:36.663  [2024-12-13 23:46:07.180406] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:36.922  ************************************
00:12:36.922  END TEST bdev_json_nonenclosed
00:12:36.922  ************************************
00:12:36.922  
00:12:36.922  real	0m0.780s
00:12:36.922  user	0m0.532s
00:12:36.922  sys	0m0.148s
00:12:36.922   23:46:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:36.922   23:46:07	-- common/autotest_common.sh@10 -- # set +x
00:12:36.922   23:46:07	-- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:36.922   23:46:07	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:12:36.922   23:46:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:36.922   23:46:07	-- common/autotest_common.sh@10 -- # set +x
00:12:36.922  ************************************
00:12:36.922  START TEST bdev_json_nonarray
00:12:36.922  ************************************
00:12:36.922   23:46:07	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:36.922  [2024-12-13 23:46:07.650217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:36.922  [2024-12-13 23:46:07.650438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110717 ]
00:12:37.181  [2024-12-13 23:46:07.826257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:37.440  [2024-12-13 23:46:08.029218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:37.440  [2024-12-13 23:46:08.029478] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:12:37.440  [2024-12-13 23:46:08.029524] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:37.699  ************************************
00:12:37.699  END TEST bdev_json_nonarray
00:12:37.699  ************************************
00:12:37.699  
00:12:37.699  real	0m0.799s
00:12:37.699  user	0m0.527s
00:12:37.699  sys	0m0.172s
00:12:37.699   23:46:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:37.699   23:46:08	-- common/autotest_common.sh@10 -- # set +x
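Both JSON negative tests hand bdevperf a deliberately malformed --json config: nonenclosed.json lacks the outer braces, nonarray.json makes 'subsystems' something other than an array, and in each case spdk_subsystem_init_from_json_config rejects the file and the app stops with a non-zero rc (the two *ERROR* lines above). Hypothetical minimal files that would trigger the same two messages (the actual files under test/bdev/ may differ):

  # "not enclosed in {}": top-level content without the surrounding object
  cat > nonenclosed.json <<'EOF'
  "subsystems": []
  EOF
  # "'subsystems' should be an array": an object where the array belongs
  cat > nonarray.json <<'EOF'
  { "subsystems": { "bdev": {} } }
  EOF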
00:12:37.699   23:46:08	-- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]]
00:12:37.699   23:46:08	-- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite ''
00:12:37.699   23:46:08	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:12:37.699   23:46:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:37.699   23:46:08	-- common/autotest_common.sh@10 -- # set +x
00:12:37.958  ************************************
00:12:37.958  START TEST bdev_qos
00:12:37.958  ************************************
00:12:37.958   23:46:08	-- common/autotest_common.sh@1114 -- # qos_test_suite ''
00:12:37.958   23:46:08	-- bdev/blockdev.sh@444 -- # QOS_PID=110755
00:12:37.958   23:46:08	-- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:12:37.958   23:46:08	-- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 110755'
00:12:37.958  Process qos testing pid: 110755
00:12:37.958   23:46:08	-- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:12:37.958   23:46:08	-- bdev/blockdev.sh@447 -- # waitforlisten 110755
00:12:37.958   23:46:08	-- common/autotest_common.sh@829 -- # '[' -z 110755 ']'
00:12:37.958   23:46:08	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:37.958   23:46:08	-- common/autotest_common.sh@834 -- # local max_retries=100
00:12:37.958   23:46:08	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:37.958  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:37.958   23:46:08	-- common/autotest_common.sh@838 -- # xtrace_disable
00:12:37.958   23:46:08	-- common/autotest_common.sh@10 -- # set +x
00:12:37.958  [2024-12-13 23:46:08.491520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:37.958  [2024-12-13 23:46:08.491722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110755 ]
00:12:37.958  [2024-12-13 23:46:08.648724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:38.217  [2024-12-13 23:46:08.884609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:38.785   23:46:09	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:38.785   23:46:09	-- common/autotest_common.sh@862 -- # return 0
00:12:38.785   23:46:09	-- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:12:38.785   23:46:09	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:38.785   23:46:09	-- common/autotest_common.sh@10 -- # set +x
00:12:39.044  Malloc_0
00:12:39.044   23:46:09	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:39.044   23:46:09	-- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0
00:12:39.044   23:46:09	-- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0
00:12:39.044   23:46:09	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:12:39.044   23:46:09	-- common/autotest_common.sh@899 -- # local i
00:12:39.044   23:46:09	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:12:39.044   23:46:09	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:12:39.044   23:46:09	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:12:39.044   23:46:09	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:39.044   23:46:09	-- common/autotest_common.sh@10 -- # set +x
00:12:39.044   23:46:09	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:39.044   23:46:09	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000
00:12:39.044   23:46:09	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:39.044   23:46:09	-- common/autotest_common.sh@10 -- # set +x
00:12:39.044  [
00:12:39.044  {
00:12:39.044  "name": "Malloc_0",
00:12:39.044  "aliases": [
00:12:39.044  "74c56630-766c-4568-8d39-0689f43cbe97"
00:12:39.044  ],
00:12:39.044  "product_name": "Malloc disk",
00:12:39.044  "block_size": 512,
00:12:39.044  "num_blocks": 262144,
00:12:39.044  "uuid": "74c56630-766c-4568-8d39-0689f43cbe97",
00:12:39.044  "assigned_rate_limits": {
00:12:39.044  "rw_ios_per_sec": 0,
00:12:39.044  "rw_mbytes_per_sec": 0,
00:12:39.044  "r_mbytes_per_sec": 0,
00:12:39.044  "w_mbytes_per_sec": 0
00:12:39.044  },
00:12:39.044  "claimed": false,
00:12:39.044  "zoned": false,
00:12:39.044  "supported_io_types": {
00:12:39.044  "read": true,
00:12:39.044  "write": true,
00:12:39.044  "unmap": true,
00:12:39.044  "write_zeroes": true,
00:12:39.044  "flush": true,
00:12:39.044  "reset": true,
00:12:39.044  "compare": false,
00:12:39.044  "compare_and_write": false,
00:12:39.044  "abort": true,
00:12:39.044  "nvme_admin": false,
00:12:39.044  "nvme_io": false
00:12:39.044  },
00:12:39.044  "memory_domains": [
00:12:39.044  {
00:12:39.044  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:39.044  "dma_device_type": 2
00:12:39.044  }
00:12:39.044  ],
00:12:39.044  "driver_specific": {}
00:12:39.044  }
00:12:39.044  ]
00:12:39.044   23:46:09	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:39.044   23:46:09	-- common/autotest_common.sh@905 -- # return 0
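waitforbdev, traced line by line above, is a readiness gate: default the per-call timeout to 2000 ms, let outstanding examine callbacks finish via bdev_wait_for_examine, then ask bdev_get_bdevs for the named bdev. Condensed into one function (the retry loop is an assumption; this run succeeds on the first query):

  waitforbdev_sketch() {
    local bdev_name=$1 bdev_timeout=${2:-2000}    # ms; matches bdev_timeout=2000 above
    rpc_cmd bdev_wait_for_examine                 # settle examine callbacks first
    local i
    for ((i = 0; i < 20; i++)); do
      rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" && return 0
      sleep 0.1
    done
    return 1
  }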
00:12:39.044   23:46:09	-- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512
00:12:39.044   23:46:09	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:39.044   23:46:09	-- common/autotest_common.sh@10 -- # set +x
00:12:39.044  Null_1
00:12:39.044   23:46:09	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:39.044   23:46:09	-- bdev/blockdev.sh@452 -- # waitforbdev Null_1
00:12:39.044   23:46:09	-- common/autotest_common.sh@897 -- # local bdev_name=Null_1
00:12:39.044   23:46:09	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:12:39.044   23:46:09	-- common/autotest_common.sh@899 -- # local i
00:12:39.044   23:46:09	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:12:39.044   23:46:09	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:12:39.044   23:46:09	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:12:39.044   23:46:09	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:39.044   23:46:09	-- common/autotest_common.sh@10 -- # set +x
00:12:39.044   23:46:09	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:39.044   23:46:09	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000
00:12:39.044   23:46:09	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:39.044   23:46:09	-- common/autotest_common.sh@10 -- # set +x
00:12:39.044  [
00:12:39.044  {
00:12:39.044  "name": "Null_1",
00:12:39.044  "aliases": [
00:12:39.044  "5ce00eea-771d-47f2-915e-228ccd9e4b22"
00:12:39.044  ],
00:12:39.044  "product_name": "Null disk",
00:12:39.044  "block_size": 512,
00:12:39.044  "num_blocks": 262144,
00:12:39.044  "uuid": "5ce00eea-771d-47f2-915e-228ccd9e4b22",
00:12:39.044  "assigned_rate_limits": {
00:12:39.044  "rw_ios_per_sec": 0,
00:12:39.044  "rw_mbytes_per_sec": 0,
00:12:39.044  "r_mbytes_per_sec": 0,
00:12:39.044  "w_mbytes_per_sec": 0
00:12:39.044  },
00:12:39.044  "claimed": false,
00:12:39.044  "zoned": false,
00:12:39.044  "supported_io_types": {
00:12:39.044  "read": true,
00:12:39.044  "write": true,
00:12:39.044  "unmap": false,
00:12:39.044  "write_zeroes": true,
00:12:39.044  "flush": false,
00:12:39.044  "reset": true,
00:12:39.044  "compare": false,
00:12:39.044  "compare_and_write": false,
00:12:39.044  "abort": true,
00:12:39.044  "nvme_admin": false,
00:12:39.044  "nvme_io": false
00:12:39.044  },
00:12:39.044  "driver_specific": {}
00:12:39.044  }
00:12:39.044  ]
00:12:39.044   23:46:09	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:39.044   23:46:09	-- common/autotest_common.sh@905 -- # return 0
00:12:39.044   23:46:09	-- bdev/blockdev.sh@455 -- # qos_function_test
00:12:39.045   23:46:09	-- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:12:39.045   23:46:09	-- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000
00:12:39.045   23:46:09	-- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2
00:12:39.045   23:46:09	-- bdev/blockdev.sh@410 -- # local io_result=0
00:12:39.045   23:46:09	-- bdev/blockdev.sh@411 -- # local iops_limit=0
00:12:39.045   23:46:09	-- bdev/blockdev.sh@412 -- # local bw_limit=0
00:12:39.045    23:46:09	-- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0
00:12:39.045    23:46:09	-- bdev/blockdev.sh@373 -- # local limit_type=IOPS
00:12:39.045    23:46:09	-- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0
00:12:39.045    23:46:09	-- bdev/blockdev.sh@375 -- # local iostat_result
00:12:39.045     23:46:09	-- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:12:39.045     23:46:09	-- bdev/blockdev.sh@376 -- # grep Malloc_0
00:12:39.045     23:46:09	-- bdev/blockdev.sh@376 -- # tail -1
00:12:39.045  Running I/O for 60 seconds...
00:12:44.315    23:46:14	-- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0  83198.17  332792.69  0.00       0.00       335872.00  0.00     0.00   '
00:12:44.315    23:46:14	-- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']'
00:12:44.315     23:46:14	-- bdev/blockdev.sh@378 -- # awk '{print $2}'
00:12:44.315    23:46:14	-- bdev/blockdev.sh@378 -- # iostat_result=83198.17
00:12:44.316    23:46:14	-- bdev/blockdev.sh@383 -- # echo 83198
00:12:44.316   23:46:14	-- bdev/blockdev.sh@414 -- # io_result=83198
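get_io_result, whose pipeline is traced just above, measures the unthrottled baseline: iostat.py -d -i 1 -t 5 prints device stats once per second for five seconds, grep keeps the Malloc_0 rows, tail -1 takes the final (steady-state) sample, and awk extracts the column of interest: $2 is IOPS for the IOPS case, $6 is the kB/s column for bandwidth. The same pipeline, condensed:

  iostat=/home/vagrant/spdk_repo/spdk/scripts/iostat.py
  row=$("$iostat" -d -i 1 -t 5 | grep Malloc_0 | tail -1)
  io_result=$(echo "$row" | awk '{print $2}')   # 83198.17 in this run
  echo "${io_result%.*}"                        # integer part: 83198, as echoed above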
00:12:44.316   23:46:14	-- bdev/blockdev.sh@416 -- # iops_limit=20000
00:12:44.316   23:46:14	-- bdev/blockdev.sh@417 -- # '[' 20000 -gt 1000 ']'
00:12:44.316   23:46:14	-- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 20000 Malloc_0
00:12:44.316   23:46:14	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:44.316   23:46:14	-- common/autotest_common.sh@10 -- # set +x
00:12:44.316   23:46:14	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:44.316   23:46:14	-- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 20000 IOPS Malloc_0
00:12:44.316   23:46:14	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:12:44.316   23:46:14	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:44.316   23:46:14	-- common/autotest_common.sh@10 -- # set +x
00:12:44.316  ************************************
00:12:44.316  START TEST bdev_qos_iops
00:12:44.316  ************************************
00:12:44.316   23:46:14	-- common/autotest_common.sh@1114 -- # run_qos_test 20000 IOPS Malloc_0
00:12:44.316   23:46:14	-- bdev/blockdev.sh@387 -- # local qos_limit=20000
00:12:44.316   23:46:14	-- bdev/blockdev.sh@388 -- # local qos_result=0
00:12:44.316    23:46:14	-- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0
00:12:44.316    23:46:14	-- bdev/blockdev.sh@373 -- # local limit_type=IOPS
00:12:44.316    23:46:14	-- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0
00:12:44.316    23:46:14	-- bdev/blockdev.sh@375 -- # local iostat_result
00:12:44.316     23:46:14	-- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:12:44.316     23:46:14	-- bdev/blockdev.sh@376 -- # tail -1
00:12:44.316     23:46:14	-- bdev/blockdev.sh@376 -- # grep Malloc_0
00:12:49.584    23:46:20	-- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0  20023.31  80093.24   0.00       0.00       81520.00   0.00     0.00   '
00:12:49.584    23:46:20	-- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']'
00:12:49.584     23:46:20	-- bdev/blockdev.sh@378 -- # awk '{print $2}'
00:12:49.584    23:46:20	-- bdev/blockdev.sh@378 -- # iostat_result=20023.31
00:12:49.584    23:46:20	-- bdev/blockdev.sh@383 -- # echo 20023
00:12:49.584  ************************************
00:12:49.584  END TEST bdev_qos_iops
00:12:49.584  ************************************
00:12:49.584   23:46:20	-- bdev/blockdev.sh@390 -- # qos_result=20023
00:12:49.584   23:46:20	-- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']'
00:12:49.584   23:46:20	-- bdev/blockdev.sh@394 -- # lower_limit=18000
00:12:49.584   23:46:20	-- bdev/blockdev.sh@395 -- # upper_limit=22000
00:12:49.584   23:46:20	-- bdev/blockdev.sh@398 -- # '[' 20023 -lt 18000 ']'
00:12:49.584   23:46:20	-- bdev/blockdev.sh@398 -- # '[' 20023 -gt 22000 ']'
00:12:49.584  
00:12:49.584  real	0m5.222s
00:12:49.584  user	0m0.103s
00:12:49.584  sys	0m0.042s
00:12:49.584   23:46:20	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:49.584   23:46:20	-- common/autotest_common.sh@10 -- # set +x
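run_qos_test passes if the measured rate lands within 10% of the configured cap, using integer arithmetic; that is where the 18000/22000 window around the 20000 IOPS limit comes from. The check, reduced to its arithmetic:

  qos_limit=20000
  lower_limit=$((qos_limit * 9 / 10))    # 18000
  upper_limit=$((qos_limit * 11 / 10))   # 22000
  qos_result=20023                       # measured above
  [ "$qos_result" -ge "$lower_limit" ] && [ "$qos_result" -le "$upper_limit" ] \
    && echo pass                         # 20023 is inside [18000, 22000]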
00:12:49.584    23:46:20	-- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1
00:12:49.584    23:46:20	-- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH
00:12:49.584    23:46:20	-- bdev/blockdev.sh@374 -- # local qos_dev=Null_1
00:12:49.584    23:46:20	-- bdev/blockdev.sh@375 -- # local iostat_result
00:12:49.584     23:46:20	-- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:12:49.584     23:46:20	-- bdev/blockdev.sh@376 -- # grep Null_1
00:12:49.584     23:46:20	-- bdev/blockdev.sh@376 -- # tail -1
00:12:54.851    23:46:25	-- bdev/blockdev.sh@376 -- # iostat_result='Null_1    31106.98  124427.91  0.00       0.00       125952.00  0.00     0.00   '
00:12:54.851    23:46:25	-- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']'
00:12:54.851    23:46:25	-- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:12:54.851     23:46:25	-- bdev/blockdev.sh@380 -- # awk '{print $6}'
00:12:54.851    23:46:25	-- bdev/blockdev.sh@380 -- # iostat_result=125952.00
00:12:54.851    23:46:25	-- bdev/blockdev.sh@383 -- # echo 125952
00:12:54.851   23:46:25	-- bdev/blockdev.sh@425 -- # bw_limit=125952
00:12:54.851   23:46:25	-- bdev/blockdev.sh@426 -- # bw_limit=12
00:12:54.851   23:46:25	-- bdev/blockdev.sh@427 -- # '[' 12 -lt 2 ']'
00:12:54.851   23:46:25	-- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1
00:12:54.851   23:46:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:54.851   23:46:25	-- common/autotest_common.sh@10 -- # set +x
00:12:54.851   23:46:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:54.851   23:46:25	-- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1
00:12:54.851   23:46:25	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:12:54.851   23:46:25	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:54.851   23:46:25	-- common/autotest_common.sh@10 -- # set +x
00:12:54.851  ************************************
00:12:54.851  START TEST bdev_qos_bw
00:12:54.851  ************************************
00:12:54.851   23:46:25	-- common/autotest_common.sh@1114 -- # run_qos_test 12 BANDWIDTH Null_1
00:12:54.851   23:46:25	-- bdev/blockdev.sh@387 -- # local qos_limit=12
00:12:54.851   23:46:25	-- bdev/blockdev.sh@388 -- # local qos_result=0
00:12:54.851    23:46:25	-- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1
00:12:54.851    23:46:25	-- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH
00:12:54.851    23:46:25	-- bdev/blockdev.sh@374 -- # local qos_dev=Null_1
00:12:54.851    23:46:25	-- bdev/blockdev.sh@375 -- # local iostat_result
00:12:54.851     23:46:25	-- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:12:54.851     23:46:25	-- bdev/blockdev.sh@376 -- # tail -1
00:12:54.851     23:46:25	-- bdev/blockdev.sh@376 -- # grep Null_1
00:13:00.118    23:46:30	-- bdev/blockdev.sh@376 -- # iostat_result='Null_1    3072.03   12288.11   0.00       0.00       12508.00  0.00     0.00   '
00:13:00.118    23:46:30	-- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']'
00:13:00.118    23:46:30	-- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:13:00.118     23:46:30	-- bdev/blockdev.sh@380 -- # awk '{print $6}'
00:13:00.118    23:46:30	-- bdev/blockdev.sh@380 -- # iostat_result=12508.00
00:13:00.118    23:46:30	-- bdev/blockdev.sh@383 -- # echo 12508
00:13:00.118  ************************************
00:13:00.118  END TEST bdev_qos_bw
00:13:00.118  ************************************
00:13:00.118   23:46:30	-- bdev/blockdev.sh@390 -- # qos_result=12508
00:13:00.118   23:46:30	-- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:13:00.118   23:46:30	-- bdev/blockdev.sh@392 -- # qos_limit=12288
00:13:00.118   23:46:30	-- bdev/blockdev.sh@394 -- # lower_limit=11059
00:13:00.118   23:46:30	-- bdev/blockdev.sh@395 -- # upper_limit=13516
00:13:00.118   23:46:30	-- bdev/blockdev.sh@398 -- # '[' 12508 -lt 11059 ']'
00:13:00.118   23:46:30	-- bdev/blockdev.sh@398 -- # '[' 12508 -gt 13516 ']'
00:13:00.118  
00:13:00.118  real	0m5.230s
00:13:00.118  user	0m0.114s
00:13:00.118  sys	0m0.029s
00:13:00.118   23:46:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:00.118   23:46:30	-- common/autotest_common.sh@10 -- # set +x
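For the bandwidth case the numbers chain together: the 125952 kB/s baseline is cut to roughly a tenth and converted to MiB/s (yielding the 12 passed to bdev_set_qos_limit --rw_mbytes_per_sec), and run_qos_test converts that cap back to KiB/s before applying the same 10% window, which the measured 12508 kB/s satisfies. Spelled out, with the scaling expression inferred from the values in the trace:

  bw_limit=125952                       # baseline kB/s from iostat
  bw_limit=$((bw_limit / 1024 / 10))    # -> 12 MiB/s target
  qos_limit=$((bw_limit * 1024))        # 12288 KiB/s for the comparison
  lower_limit=$((qos_limit * 9 / 10))   # 11059
  upper_limit=$((qos_limit * 11 / 10))  # 13516; measured 12508 is in range -> pass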
00:13:00.118   23:46:30	-- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
00:13:00.118   23:46:30	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:00.118   23:46:30	-- common/autotest_common.sh@10 -- # set +x
00:13:00.118   23:46:30	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:00.118   23:46:30	-- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0
00:13:00.118   23:46:30	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:13:00.118   23:46:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:00.118   23:46:30	-- common/autotest_common.sh@10 -- # set +x
00:13:00.118  ************************************
00:13:00.118  START TEST bdev_qos_ro_bw
00:13:00.118  ************************************
00:13:00.118   23:46:30	-- common/autotest_common.sh@1114 -- # run_qos_test 2 BANDWIDTH Malloc_0
00:13:00.118   23:46:30	-- bdev/blockdev.sh@387 -- # local qos_limit=2
00:13:00.118   23:46:30	-- bdev/blockdev.sh@388 -- # local qos_result=0
00:13:00.118    23:46:30	-- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0
00:13:00.118    23:46:30	-- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH
00:13:00.118    23:46:30	-- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0
00:13:00.118    23:46:30	-- bdev/blockdev.sh@375 -- # local iostat_result
00:13:00.118     23:46:30	-- bdev/blockdev.sh@376 -- # grep Malloc_0
00:13:00.118     23:46:30	-- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:13:00.118     23:46:30	-- bdev/blockdev.sh@376 -- # tail -1
00:13:05.383    23:46:35	-- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0  512.00   2048.01    0.00       0.00       2068.00   0.00     0.00   '
00:13:05.383    23:46:35	-- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']'
00:13:05.383    23:46:35	-- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:13:05.383     23:46:35	-- bdev/blockdev.sh@380 -- # awk '{print $6}'
00:13:05.383    23:46:35	-- bdev/blockdev.sh@380 -- # iostat_result=2068.00
00:13:05.383    23:46:35	-- bdev/blockdev.sh@383 -- # echo 2068
00:13:05.383  ************************************
00:13:05.383  END TEST bdev_qos_ro_bw
00:13:05.383  ************************************
00:13:05.383   23:46:35	-- bdev/blockdev.sh@390 -- # qos_result=2068
00:13:05.383   23:46:35	-- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:13:05.383   23:46:35	-- bdev/blockdev.sh@392 -- # qos_limit=2048
00:13:05.383   23:46:35	-- bdev/blockdev.sh@394 -- # lower_limit=1843
00:13:05.383   23:46:35	-- bdev/blockdev.sh@395 -- # upper_limit=2252
00:13:05.383   23:46:35	-- bdev/blockdev.sh@398 -- # '[' 2068 -lt 1843 ']'
00:13:05.383   23:46:35	-- bdev/blockdev.sh@398 -- # '[' 2068 -gt 2252 ']'
00:13:05.383  
00:13:05.383  real	0m5.172s
00:13:05.383  user	0m0.113s
00:13:05.383  sys	0m0.034s
00:13:05.383   23:46:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:05.383   23:46:35	-- common/autotest_common.sh@10 -- # set +x
00:13:05.383   23:46:35	-- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:13:05.383   23:46:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:05.383   23:46:35	-- common/autotest_common.sh@10 -- # set +x
00:13:05.950   23:46:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:05.950   23:46:36	-- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1
00:13:05.950   23:46:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:05.950   23:46:36	-- common/autotest_common.sh@10 -- # set +x
00:13:05.950  
00:13:05.950                                                                                                  Latency(us)
00:13:05.950  
[2024-12-13T23:46:36.682Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:05.950  
[2024-12-13T23:46:36.682Z]  Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:13:05.950  	 Malloc_0            :      26.68   27788.26     108.55       0.00     0.00    9127.15    1891.61  503316.48
00:13:05.950  
[2024-12-13T23:46:36.682Z]  Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:13:05.950  	 Null_1              :      26.87   29101.17     113.68       0.00     0.00    8781.88     618.12  186837.18
00:13:05.950  
[2024-12-13T23:46:36.682Z]  ===================================================================================================================
00:13:05.950  
[2024-12-13T23:46:36.682Z]  Total                       :              56889.44     222.22       0.00     0.00    8949.93     618.12  503316.48
00:13:05.950  0
00:13:05.950   23:46:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:05.950   23:46:36	-- bdev/blockdev.sh@459 -- # killprocess 110755
00:13:05.950   23:46:36	-- common/autotest_common.sh@936 -- # '[' -z 110755 ']'
00:13:05.950   23:46:36	-- common/autotest_common.sh@940 -- # kill -0 110755
00:13:05.950    23:46:36	-- common/autotest_common.sh@941 -- # uname
00:13:05.950   23:46:36	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:05.950    23:46:36	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110755
00:13:05.950   23:46:36	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:13:05.950   23:46:36	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:13:05.950   23:46:36	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 110755'
00:13:05.950  killing process with pid 110755
00:13:05.950   23:46:36	-- common/autotest_common.sh@955 -- # kill 110755
00:13:05.950  Received shutdown signal, test time was about 26.908163 seconds
00:13:05.950  
00:13:05.950                                                                                                  Latency(us)
00:13:05.950  
[2024-12-13T23:46:36.682Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:05.950  
[2024-12-13T23:46:36.682Z]  ===================================================================================================================
00:13:05.950  
[2024-12-13T23:46:36.682Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:13:05.950   23:46:36	-- common/autotest_common.sh@960 -- # wait 110755
00:13:07.326   23:46:37	-- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT
00:13:07.326  
00:13:07.326  real	0m29.399s
00:13:07.326  user	0m30.113s
00:13:07.326  sys	0m0.668s
00:13:07.326   23:46:37	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:07.326   23:46:37	-- common/autotest_common.sh@10 -- # set +x
00:13:07.326  ************************************
00:13:07.326  END TEST bdev_qos
00:13:07.326  ************************************
00:13:07.326   23:46:37	-- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite ''
00:13:07.326   23:46:37	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:07.326   23:46:37	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:07.326   23:46:37	-- common/autotest_common.sh@10 -- # set +x
00:13:07.326  ************************************
00:13:07.326  START TEST bdev_qd_sampling
00:13:07.326  ************************************
00:13:07.326  Process bdev QD sampling period testing pid: 111232
00:13:07.326  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:07.326   23:46:37	-- common/autotest_common.sh@1114 -- # qd_sampling_test_suite ''
00:13:07.326   23:46:37	-- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD
00:13:07.326   23:46:37	-- bdev/blockdev.sh@539 -- # QD_PID=111232
00:13:07.326   23:46:37	-- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 111232'
00:13:07.326   23:46:37	-- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT
00:13:07.326   23:46:37	-- bdev/blockdev.sh@542 -- # waitforlisten 111232
00:13:07.326   23:46:37	-- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C ''
00:13:07.326   23:46:37	-- common/autotest_common.sh@829 -- # '[' -z 111232 ']'
00:13:07.326   23:46:37	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:07.326   23:46:37	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:07.326   23:46:37	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:07.326   23:46:37	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:07.326   23:46:37	-- common/autotest_common.sh@10 -- # set +x
00:13:07.326  [2024-12-13 23:46:37.962260] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:07.326  [2024-12-13 23:46:37.962729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111232 ]
00:13:07.585  [2024-12-13 23:46:38.141331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:07.843  [2024-12-13 23:46:38.351316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:07.843  [2024-12-13 23:46:38.351322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:08.409   23:46:38	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:08.409   23:46:38	-- common/autotest_common.sh@862 -- # return 0
00:13:08.409   23:46:38	-- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512
00:13:08.409   23:46:38	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:08.409   23:46:38	-- common/autotest_common.sh@10 -- # set +x
00:13:08.409  Malloc_QD
00:13:08.409   23:46:39	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:08.409   23:46:39	-- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD
00:13:08.409   23:46:39	-- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD
00:13:08.409   23:46:39	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:08.409   23:46:39	-- common/autotest_common.sh@899 -- # local i
00:13:08.409   23:46:39	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:08.409   23:46:39	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:08.409   23:46:39	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:13:08.409   23:46:39	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:08.409   23:46:39	-- common/autotest_common.sh@10 -- # set +x
00:13:08.409   23:46:39	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:08.409   23:46:39	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000
00:13:08.409   23:46:39	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:08.409   23:46:39	-- common/autotest_common.sh@10 -- # set +x
00:13:08.409  [
00:13:08.409  {
00:13:08.409  "name": "Malloc_QD",
00:13:08.409  "aliases": [
00:13:08.409  "4edee338-30cb-457e-925a-34e5ec563ce4"
00:13:08.409  ],
00:13:08.409  "product_name": "Malloc disk",
00:13:08.409  "block_size": 512,
00:13:08.409  "num_blocks": 262144,
00:13:08.409  "uuid": "4edee338-30cb-457e-925a-34e5ec563ce4",
00:13:08.409  "assigned_rate_limits": {
00:13:08.409  "rw_ios_per_sec": 0,
00:13:08.409  "rw_mbytes_per_sec": 0,
00:13:08.409  "r_mbytes_per_sec": 0,
00:13:08.409  "w_mbytes_per_sec": 0
00:13:08.409  },
00:13:08.409  "claimed": false,
00:13:08.409  "zoned": false,
00:13:08.409  "supported_io_types": {
00:13:08.409  "read": true,
00:13:08.409  "write": true,
00:13:08.409  "unmap": true,
00:13:08.409  "write_zeroes": true,
00:13:08.409  "flush": true,
00:13:08.409  "reset": true,
00:13:08.409  "compare": false,
00:13:08.409  "compare_and_write": false,
00:13:08.409  "abort": true,
00:13:08.409  "nvme_admin": false,
00:13:08.409  "nvme_io": false
00:13:08.409  },
00:13:08.409  "memory_domains": [
00:13:08.409  {
00:13:08.409  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:08.409  "dma_device_type": 2
00:13:08.409  }
00:13:08.409  ],
00:13:08.409  "driver_specific": {}
00:13:08.409  }
00:13:08.409  ]
00:13:08.409   23:46:39	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:08.409   23:46:39	-- common/autotest_common.sh@905 -- # return 0
00:13:08.409   23:46:39	-- bdev/blockdev.sh@548 -- # sleep 2
00:13:08.409   23:46:39	-- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:13:08.666  Running I/O for 5 seconds...
00:13:10.569   23:46:41	-- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD
00:13:10.569   23:46:41	-- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD
00:13:10.569   23:46:41	-- bdev/blockdev.sh@518 -- # local sampling_period=10
00:13:10.569   23:46:41	-- bdev/blockdev.sh@519 -- # local iostats
00:13:10.569   23:46:41	-- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10
00:13:10.569   23:46:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.569   23:46:41	-- common/autotest_common.sh@10 -- # set +x
00:13:10.569   23:46:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.569    23:46:41	-- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD
00:13:10.569    23:46:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.569    23:46:41	-- common/autotest_common.sh@10 -- # set +x
00:13:10.569    23:46:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.569   23:46:41	-- bdev/blockdev.sh@523 -- # iostats='{
00:13:10.569  "tick_rate": 2200000000,
00:13:10.569  "ticks": 1657513491465,
00:13:10.569  "bdevs": [
00:13:10.569  {
00:13:10.569  "name": "Malloc_QD",
00:13:10.569  "bytes_read": 575705600,
00:13:10.569  "num_read_ops": 140547,
00:13:10.569  "bytes_written": 0,
00:13:10.569  "num_write_ops": 0,
00:13:10.569  "bytes_unmapped": 0,
00:13:10.569  "num_unmap_ops": 0,
00:13:10.569  "bytes_copied": 0,
00:13:10.569  "num_copy_ops": 0,
00:13:10.569  "read_latency_ticks": 2150745991576,
00:13:10.569  "max_read_latency_ticks": 25696893,
00:13:10.569  "min_read_latency_ticks": 347164,
00:13:10.569  "write_latency_ticks": 0,
00:13:10.569  "max_write_latency_ticks": 0,
00:13:10.569  "min_write_latency_ticks": 0,
00:13:10.569  "unmap_latency_ticks": 0,
00:13:10.569  "max_unmap_latency_ticks": 0,
00:13:10.569  "min_unmap_latency_ticks": 0,
00:13:10.569  "copy_latency_ticks": 0,
00:13:10.569  "max_copy_latency_ticks": 0,
00:13:10.569  "min_copy_latency_ticks": 0,
00:13:10.569  "io_error": {},
00:13:10.569  "queue_depth_polling_period": 10,
00:13:10.569  "queue_depth": 512,
00:13:10.569  "io_time": 20,
00:13:10.569  "weighted_io_time": 10240
00:13:10.569  }
00:13:10.569  ]
00:13:10.569  }'
00:13:10.569    23:46:41	-- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period'
00:13:10.569   23:46:41	-- bdev/blockdev.sh@525 -- # qd_sampling_period=10
00:13:10.569   23:46:41	-- bdev/blockdev.sh@527 -- # '[' 10 == null ']'
00:13:10.569   23:46:41	-- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']'
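The sampling check reads the stats back with bdev_get_iostat and pulls queue_depth_polling_period out with jq, confirming it equals the value set earlier (10). An equivalent extraction that re-queries directly (the script reuses its saved $iostats variable), plus a hand recovery of the measured queue depth from the same dump, since weighted_io_time divided by io_time gives the average depth:

  period=$(rpc_cmd bdev_get_iostat -b Malloc_QD \
             | jq -r '.bdevs[0].queue_depth_polling_period')
  [ "$period" -eq 10 ] && echo 'sampling period as configured'
  echo $((10240 / 20))   # weighted_io_time / io_time = 512 = 2 jobs x queue depth 256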
00:13:10.569   23:46:41	-- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD
00:13:10.569   23:46:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.569   23:46:41	-- common/autotest_common.sh@10 -- # set +x
00:13:10.569  
00:13:10.569                                                                                                  Latency(us)
00:13:10.569  
[2024-12-13T23:46:41.301Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:10.569  
[2024-12-13T23:46:41.301Z]  Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:13:10.569  	 Malloc_QD           :       1.99   35351.91     138.09       0.00     0.00    7223.70    1482.01   11736.90
00:13:10.569  
[2024-12-13T23:46:41.301Z]  Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:13:10.569  	 Malloc_QD           :       1.99   38031.05     148.56       0.00     0.00    6716.36     726.11    8817.57
00:13:10.569  
[2024-12-13T23:46:41.301Z]  ===================================================================================================================
00:13:10.569  
[2024-12-13T23:46:41.301Z]  Total                       :              73382.96     286.65       0.00     0.00    6960.70     726.11   11736.90
00:13:10.828   23:46:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.828  0
00:13:10.828   23:46:41	-- bdev/blockdev.sh@552 -- # killprocess 111232
00:13:10.828   23:46:41	-- common/autotest_common.sh@936 -- # '[' -z 111232 ']'
00:13:10.828   23:46:41	-- common/autotest_common.sh@940 -- # kill -0 111232
00:13:10.828    23:46:41	-- common/autotest_common.sh@941 -- # uname
00:13:10.828   23:46:41	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:10.828    23:46:41	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111232
00:13:10.828   23:46:41	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:10.828   23:46:41	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:10.828   23:46:41	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 111232'
00:13:10.828  killing process with pid 111232
00:13:10.828   23:46:41	-- common/autotest_common.sh@955 -- # kill 111232
00:13:10.828   23:46:41	-- common/autotest_common.sh@960 -- # wait 111232
00:13:10.828  Received shutdown signal, test time was about 2.114925 seconds
00:13:10.828  
00:13:10.828                                                                                                  Latency(us)
00:13:10.828  
[2024-12-13T23:46:41.560Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:10.828  
[2024-12-13T23:46:41.560Z]  ===================================================================================================================
00:13:10.828  
[2024-12-13T23:46:41.560Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:13:11.765   23:46:42	-- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT
00:13:11.765  
00:13:11.765  real	0m4.547s
00:13:11.765  user	0m8.444s
00:13:11.765  sys	0m0.459s
00:13:11.765   23:46:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:11.765   23:46:42	-- common/autotest_common.sh@10 -- # set +x
00:13:11.765  ************************************
00:13:11.765  END TEST bdev_qd_sampling
00:13:11.765  ************************************
00:13:11.765   23:46:42	-- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite ''
00:13:11.765   23:46:42	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:11.765   23:46:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:11.765   23:46:42	-- common/autotest_common.sh@10 -- # set +x
00:13:12.023  ************************************
00:13:12.023  START TEST bdev_error
00:13:12.023  ************************************
00:13:12.023   23:46:42	-- common/autotest_common.sh@1114 -- # error_test_suite ''
00:13:12.023   23:46:42	-- bdev/blockdev.sh@464 -- # DEV_1=Dev_1
00:13:12.023   23:46:42	-- bdev/blockdev.sh@465 -- # DEV_2=Dev_2
00:13:12.023   23:46:42	-- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1
00:13:12.023   23:46:42	-- bdev/blockdev.sh@470 -- # ERR_PID=111319
00:13:12.023  Process error testing pid: 111319
00:13:12.023   23:46:42	-- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 111319'
00:13:12.023   23:46:42	-- bdev/blockdev.sh@472 -- # waitforlisten 111319
00:13:12.023   23:46:42	-- common/autotest_common.sh@829 -- # '[' -z 111319 ']'
00:13:12.023   23:46:42	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:12.023   23:46:42	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:12.023  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:12.023   23:46:42	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:12.023   23:46:42	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:12.023   23:46:42	-- common/autotest_common.sh@10 -- # set +x
00:13:12.023   23:46:42	-- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f ''
00:13:12.023  [2024-12-13 23:46:42.553193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:12.023  [2024-12-13 23:46:42.553497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111319 ]
00:13:12.023  [2024-12-13 23:46:42.705113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:12.282  [2024-12-13 23:46:42.899109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:12.849   23:46:43	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:12.849   23:46:43	-- common/autotest_common.sh@862 -- # return 0
00:13:12.849   23:46:43	-- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:13:12.849   23:46:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.849   23:46:43	-- common/autotest_common.sh@10 -- # set +x
00:13:13.121  Dev_1
00:13:13.121   23:46:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.121   23:46:43	-- bdev/blockdev.sh@475 -- # waitforbdev Dev_1
00:13:13.121   23:46:43	-- common/autotest_common.sh@897 -- # local bdev_name=Dev_1
00:13:13.121   23:46:43	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:13.121   23:46:43	-- common/autotest_common.sh@899 -- # local i
00:13:13.121   23:46:43	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:13.121   23:46:43	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:13.121   23:46:43	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:13:13.121   23:46:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.121   23:46:43	-- common/autotest_common.sh@10 -- # set +x
00:13:13.121   23:46:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.121   23:46:43	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000
00:13:13.121   23:46:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.121   23:46:43	-- common/autotest_common.sh@10 -- # set +x
00:13:13.121  [
00:13:13.121  {
00:13:13.121  "name": "Dev_1",
00:13:13.121  "aliases": [
00:13:13.121  "6da1b5dc-4b81-4438-8da0-cd0131ac5b45"
00:13:13.121  ],
00:13:13.121  "product_name": "Malloc disk",
00:13:13.121  "block_size": 512,
00:13:13.121  "num_blocks": 262144,
00:13:13.121  "uuid": "6da1b5dc-4b81-4438-8da0-cd0131ac5b45",
00:13:13.121  "assigned_rate_limits": {
00:13:13.121  "rw_ios_per_sec": 0,
00:13:13.121  "rw_mbytes_per_sec": 0,
00:13:13.121  "r_mbytes_per_sec": 0,
00:13:13.121  "w_mbytes_per_sec": 0
00:13:13.121  },
00:13:13.121  "claimed": false,
00:13:13.121  "zoned": false,
00:13:13.121  "supported_io_types": {
00:13:13.121  "read": true,
00:13:13.121  "write": true,
00:13:13.121  "unmap": true,
00:13:13.121  "write_zeroes": true,
00:13:13.121  "flush": true,
00:13:13.121  "reset": true,
00:13:13.121  "compare": false,
00:13:13.121  "compare_and_write": false,
00:13:13.121  "abort": true,
00:13:13.121  "nvme_admin": false,
00:13:13.121  "nvme_io": false
00:13:13.121  },
00:13:13.121  "memory_domains": [
00:13:13.121  {
00:13:13.121  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:13.121  "dma_device_type": 2
00:13:13.121  }
00:13:13.121  ],
00:13:13.121  "driver_specific": {}
00:13:13.121  }
00:13:13.121  ]
00:13:13.121   23:46:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.121   23:46:43	-- common/autotest_common.sh@905 -- # return 0
00:13:13.121   23:46:43	-- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1
00:13:13.121   23:46:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.121   23:46:43	-- common/autotest_common.sh@10 -- # set +x
00:13:13.121  true
00:13:13.121   23:46:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.121   23:46:43	-- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512
00:13:13.121   23:46:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.121   23:46:43	-- common/autotest_common.sh@10 -- # set +x
00:13:13.121  Dev_2
00:13:13.121   23:46:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.121   23:46:43	-- bdev/blockdev.sh@478 -- # waitforbdev Dev_2
00:13:13.121   23:46:43	-- common/autotest_common.sh@897 -- # local bdev_name=Dev_2
00:13:13.121   23:46:43	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:13.121   23:46:43	-- common/autotest_common.sh@899 -- # local i
00:13:13.121   23:46:43	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:13.121   23:46:43	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:13.121   23:46:43	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:13:13.121   23:46:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.121   23:46:43	-- common/autotest_common.sh@10 -- # set +x
00:13:13.121   23:46:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.121   23:46:43	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000
00:13:13.121   23:46:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.121   23:46:43	-- common/autotest_common.sh@10 -- # set +x
00:13:13.121  [
00:13:13.121  {
00:13:13.121  "name": "Dev_2",
00:13:13.121  "aliases": [
00:13:13.121  "03811206-27d6-451b-8bd0-c29fff6e4516"
00:13:13.121  ],
00:13:13.121  "product_name": "Malloc disk",
00:13:13.121  "block_size": 512,
00:13:13.122  "num_blocks": 262144,
00:13:13.122  "uuid": "03811206-27d6-451b-8bd0-c29fff6e4516",
00:13:13.122  "assigned_rate_limits": {
00:13:13.122  "rw_ios_per_sec": 0,
00:13:13.122  "rw_mbytes_per_sec": 0,
00:13:13.122  "r_mbytes_per_sec": 0,
00:13:13.122  "w_mbytes_per_sec": 0
00:13:13.122  },
00:13:13.122  "claimed": false,
00:13:13.122  "zoned": false,
00:13:13.122  "supported_io_types": {
00:13:13.122  "read": true,
00:13:13.122  "write": true,
00:13:13.122  "unmap": true,
00:13:13.122  "write_zeroes": true,
00:13:13.122  "flush": true,
00:13:13.122  "reset": true,
00:13:13.122  "compare": false,
00:13:13.122  "compare_and_write": false,
00:13:13.122  "abort": true,
00:13:13.122  "nvme_admin": false,
00:13:13.122  "nvme_io": false
00:13:13.122  },
00:13:13.122  "memory_domains": [
00:13:13.122  {
00:13:13.122  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:13.122  "dma_device_type": 2
00:13:13.122  }
00:13:13.122  ],
00:13:13.122  "driver_specific": {}
00:13:13.122  }
00:13:13.122  ]
00:13:13.122   23:46:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.122   23:46:43	-- common/autotest_common.sh@905 -- # return 0
00:13:13.122   23:46:43	-- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5
00:13:13.122   23:46:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.122   23:46:43	-- common/autotest_common.sh@10 -- # set +x
00:13:13.122   23:46:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.122   23:46:43	-- bdev/blockdev.sh@482 -- # sleep 1
00:13:13.122   23:46:43	-- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
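This error suite stacks an error-injecting vbdev on the malloc base: bdev_error_create Dev_1 exposes EE_Dev_1, and bdev_error_inject_error EE_Dev_1 all failure -n 5 arms it to fail the next five I/Os of any type. Because bdevperf was started with -f (continue on error), the run survives them, which is why the table below shows EE_Dev_1 with about five total failures (5.49 Fail/s over a 0.91 s runtime). The injection steps as a standalone sketch, names as in this run:

  rpc_cmd bdev_malloc_create -b Dev_1 128 512                 # base bdev
  rpc_cmd bdev_error_create Dev_1                             # exposes EE_Dev_1 on top
  rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5   # fail the next 5 I/Os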
00:13:13.400  Running I/O for 5 seconds...
00:13:14.336   23:46:44	-- bdev/blockdev.sh@485 -- # kill -0 111319
00:13:14.336   23:46:44	-- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 111319'
00:13:14.336  Process is existed as continue on error is set. Pid: 111319
00:13:14.336   23:46:44	-- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1
00:13:14.336   23:46:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:14.336   23:46:44	-- common/autotest_common.sh@10 -- # set +x
00:13:14.336   23:46:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:14.336   23:46:44	-- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1
00:13:14.336   23:46:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:14.336   23:46:44	-- common/autotest_common.sh@10 -- # set +x
00:13:14.336  Timeout while waiting for response:
00:13:14.336  
00:13:14.336  
00:13:14.593   23:46:45	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:14.593   23:46:45	-- bdev/blockdev.sh@495 -- # sleep 5
00:13:18.781  
00:13:18.781                                                                                                  Latency(us)
00:13:18.781  
[2024-12-13T23:46:49.513Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:18.781  
[2024-12-13T23:46:49.513Z]  Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:13:18.781  	 EE_Dev_1            :       0.91   46158.68     180.31       5.49     0.00     344.20     140.57    1243.69
00:13:18.781  
[2024-12-13T23:46:49.513Z]  Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:13:18.781  	 Dev_2               :       5.00   95124.26     371.58       0.00     0.00     165.86      53.53  289788.28
00:13:18.781  
[2024-12-13T23:46:49.513Z]  ===================================================================================================================
00:13:18.781  
[2024-12-13T23:46:49.513Z]  Total                       :             141282.94     551.89       5.49     0.00     180.33      53.53  289788.28
00:13:19.717   23:46:50	-- bdev/blockdev.sh@497 -- # killprocess 111319
00:13:19.717   23:46:50	-- common/autotest_common.sh@936 -- # '[' -z 111319 ']'
00:13:19.717   23:46:50	-- common/autotest_common.sh@940 -- # kill -0 111319
00:13:19.717    23:46:50	-- common/autotest_common.sh@941 -- # uname
00:13:19.717   23:46:50	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:19.717    23:46:50	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111319
00:13:19.717  killing process with pid 111319
00:13:19.717  Received shutdown signal, test time was about 5.000000 seconds
00:13:19.717  
00:13:19.717                                                                                                  Latency(us)
00:13:19.717  
[2024-12-13T23:46:50.449Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:19.717  
[2024-12-13T23:46:50.449Z]  ===================================================================================================================
00:13:19.717  
[2024-12-13T23:46:50.449Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:13:19.717   23:46:50	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:13:19.717   23:46:50	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:13:19.717   23:46:50	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 111319'
00:13:19.717   23:46:50	-- common/autotest_common.sh@955 -- # kill 111319
00:13:19.717   23:46:50	-- common/autotest_common.sh@960 -- # wait 111319
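killprocess, as traced here, guards the SIGTERM with two checks: kill -0 confirms the pid is still alive, and ps -o comm= confirms it still names one of our reactor threads rather than a sudo wrapper the app might have been launched under. A hedged sketch of that pattern:

  # Sketch of the killprocess pattern above: verify liveness and
  # ownership of the pid before signalling and reaping it.
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                        # still running?
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 above
      [ "$process_name" = "sudo" ] && return 1          # never signal sudo itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }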
00:13:21.093   23:46:51	-- bdev/blockdev.sh@501 -- # ERR_PID=111441
00:13:21.093   23:46:51	-- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 111441'
00:13:21.093  Process error testing pid: 111441
00:13:21.093   23:46:51	-- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 ''
00:13:21.093   23:46:51	-- bdev/blockdev.sh@503 -- # waitforlisten 111441
00:13:21.093   23:46:51	-- common/autotest_common.sh@829 -- # '[' -z 111441 ']'
00:13:21.093   23:46:51	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:21.094   23:46:51	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:21.094  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:21.094   23:46:51	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:21.094   23:46:51	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:21.094   23:46:51	-- common/autotest_common.sh@10 -- # set +x
00:13:21.094  [2024-12-13 23:46:51.477457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:21.094  [2024-12-13 23:46:51.477678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111441 ]
00:13:21.094  [2024-12-13 23:46:51.648633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:21.352  [2024-12-13 23:46:51.840211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:21.611   23:46:52	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:21.611   23:46:52	-- common/autotest_common.sh@862 -- # return 0
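The second bdevperf instance was launched with -z, which makes it initialize, listen on the RPC socket, and hold all I/O until perform_tests arrives over JSON-RPC; waitforlisten then polls the socket before the script proceeds. A minimal sketch, assuming the default /var/tmp/spdk.sock and rpc_get_methods as the liveness probe:

  # Launch bdevperf in RPC-driven mode and wait for its socket (sketch).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -m 0x2 -q 16 -o 4096 -w randread -t 5 &
  ERR_PID=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
      sleep 0.1                              # assumption: simple poll loop
  done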
00:13:21.611   23:46:52	-- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:13:21.611   23:46:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:21.611   23:46:52	-- common/autotest_common.sh@10 -- # set +x
00:13:21.870  Dev_1
00:13:21.870   23:46:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:21.870   23:46:52	-- bdev/blockdev.sh@506 -- # waitforbdev Dev_1
00:13:21.870   23:46:52	-- common/autotest_common.sh@897 -- # local bdev_name=Dev_1
00:13:21.870   23:46:52	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:21.870   23:46:52	-- common/autotest_common.sh@899 -- # local i
00:13:21.870   23:46:52	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:21.870   23:46:52	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:21.870   23:46:52	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:13:21.870   23:46:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:21.870   23:46:52	-- common/autotest_common.sh@10 -- # set +x
00:13:21.870   23:46:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:21.870   23:46:52	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000
00:13:21.870   23:46:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:21.870   23:46:52	-- common/autotest_common.sh@10 -- # set +x
00:13:21.870  [
00:13:21.870  {
00:13:21.870  "name": "Dev_1",
00:13:21.870  "aliases": [
00:13:21.870  "a99e6046-8ca6-4d1c-afb2-91ea7e34a2bd"
00:13:21.870  ],
00:13:21.870  "product_name": "Malloc disk",
00:13:21.870  "block_size": 512,
00:13:21.870  "num_blocks": 262144,
00:13:21.870  "uuid": "a99e6046-8ca6-4d1c-afb2-91ea7e34a2bd",
00:13:21.870  "assigned_rate_limits": {
00:13:21.870  "rw_ios_per_sec": 0,
00:13:21.870  "rw_mbytes_per_sec": 0,
00:13:21.870  "r_mbytes_per_sec": 0,
00:13:21.870  "w_mbytes_per_sec": 0
00:13:21.870  },
00:13:21.870  "claimed": false,
00:13:21.870  "zoned": false,
00:13:21.870  "supported_io_types": {
00:13:21.870  "read": true,
00:13:21.870  "write": true,
00:13:21.870  "unmap": true,
00:13:21.870  "write_zeroes": true,
00:13:21.870  "flush": true,
00:13:21.870  "reset": true,
00:13:21.870  "compare": false,
00:13:21.870  "compare_and_write": false,
00:13:21.870  "abort": true,
00:13:21.870  "nvme_admin": false,
00:13:21.870  "nvme_io": false
00:13:21.870  },
00:13:21.870  "memory_domains": [
00:13:21.870  {
00:13:21.870  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:21.870  "dma_device_type": 2
00:13:21.870  }
00:13:21.870  ],
00:13:21.870  "driver_specific": {}
00:13:21.870  }
00:13:21.870  ]
00:13:21.870   23:46:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:21.870   23:46:52	-- common/autotest_common.sh@905 -- # return 0
00:13:21.870   23:46:52	-- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1
00:13:21.870   23:46:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:21.870   23:46:52	-- common/autotest_common.sh@10 -- # set +x
00:13:21.870  true
00:13:21.870   23:46:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:21.870   23:46:52	-- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512
00:13:21.870   23:46:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:21.870   23:46:52	-- common/autotest_common.sh@10 -- # set +x
00:13:22.129  Dev_2
00:13:22.129   23:46:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:22.129   23:46:52	-- bdev/blockdev.sh@509 -- # waitforbdev Dev_2
00:13:22.129   23:46:52	-- common/autotest_common.sh@897 -- # local bdev_name=Dev_2
00:13:22.129   23:46:52	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:22.129   23:46:52	-- common/autotest_common.sh@899 -- # local i
00:13:22.129   23:46:52	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:22.129   23:46:52	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:22.129   23:46:52	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:13:22.129   23:46:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:22.129   23:46:52	-- common/autotest_common.sh@10 -- # set +x
00:13:22.129   23:46:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:22.129   23:46:52	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000
00:13:22.129   23:46:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:22.129   23:46:52	-- common/autotest_common.sh@10 -- # set +x
00:13:22.129  [
00:13:22.129  {
00:13:22.129  "name": "Dev_2",
00:13:22.129  "aliases": [
00:13:22.129  "2f3939cc-d30c-4119-9c11-8b0351ad691a"
00:13:22.129  ],
00:13:22.129  "product_name": "Malloc disk",
00:13:22.129  "block_size": 512,
00:13:22.129  "num_blocks": 262144,
00:13:22.129  "uuid": "2f3939cc-d30c-4119-9c11-8b0351ad691a",
00:13:22.129  "assigned_rate_limits": {
00:13:22.129  "rw_ios_per_sec": 0,
00:13:22.129  "rw_mbytes_per_sec": 0,
00:13:22.129  "r_mbytes_per_sec": 0,
00:13:22.129  "w_mbytes_per_sec": 0
00:13:22.129  },
00:13:22.129  "claimed": false,
00:13:22.129  "zoned": false,
00:13:22.129  "supported_io_types": {
00:13:22.129  "read": true,
00:13:22.129  "write": true,
00:13:22.129  "unmap": true,
00:13:22.129  "write_zeroes": true,
00:13:22.129  "flush": true,
00:13:22.129  "reset": true,
00:13:22.129  "compare": false,
00:13:22.129  "compare_and_write": false,
00:13:22.129  "abort": true,
00:13:22.129  "nvme_admin": false,
00:13:22.129  "nvme_io": false
00:13:22.129  },
00:13:22.129  "memory_domains": [
00:13:22.129  {
00:13:22.129  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:22.129  "dma_device_type": 2
00:13:22.129  }
00:13:22.129  ],
00:13:22.129  "driver_specific": {}
00:13:22.129  }
00:13:22.129  ]
00:13:22.129   23:46:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:22.129   23:46:52	-- common/autotest_common.sh@905 -- # return 0
00:13:22.129   23:46:52	-- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5
00:13:22.129   23:46:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:22.129   23:46:52	-- common/autotest_common.sh@10 -- # set +x
00:13:22.129   23:46:52	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:22.129   23:46:52	-- bdev/blockdev.sh@513 -- # NOT wait 111441
00:13:22.129   23:46:52	-- common/autotest_common.sh@650 -- # local es=0
00:13:22.129   23:46:52	-- common/autotest_common.sh@652 -- # valid_exec_arg wait 111441
00:13:22.129   23:46:52	-- common/autotest_common.sh@638 -- # local arg=wait
00:13:22.129   23:46:52	-- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
00:13:22.129   23:46:52	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:22.129    23:46:52	-- common/autotest_common.sh@642 -- # type -t wait
00:13:22.129   23:46:52	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:22.129   23:46:52	-- common/autotest_common.sh@653 -- # wait 111441
00:13:22.129  Running I/O for 5 seconds...
00:13:22.129  task offset: 11232 on job bdev=EE_Dev_1 fails
00:13:22.129  
00:13:22.129                                                                                                  Latency(us)
00:13:22.129  
[2024-12-13T23:46:52.861Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:22.129  
[2024-12-13T23:46:52.861Z]  Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:13:22.129  
[2024-12-13T23:46:52.861Z]  Job: EE_Dev_1 ended in about 0.00 seconds with error
00:13:22.129  	 EE_Dev_1            :       0.00   30555.56     119.36    6944.44     0.00     344.38     143.36     625.57
00:13:22.129  
[2024-12-13T23:46:52.861Z]  Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:13:22.129  	 Dev_2               :       0.00   22176.02      86.63       0.00     0.00     478.15     110.31     878.78
00:13:22.129  
[2024-12-13T23:46:52.861Z]  ===================================================================================================================
00:13:22.129  
[2024-12-13T23:46:52.861Z]  Total                       :              52731.58     205.98    6944.44     0.00     416.94     110.31     878.78
00:13:22.129  [2024-12-13 23:46:52.770857] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:22.129  request:
00:13:22.129  {
00:13:22.129    "method": "perform_tests",
00:13:22.129    "req_id": 1
00:13:22.129  }
00:13:22.129  Got JSON-RPC error response
00:13:22.129  response:
00:13:22.129  {
00:13:22.129    "code": -32603,
00:13:22.129    "message": "bdevperf failed with error Operation not permitted"
00:13:22.129  }
00:13:24.033   23:46:54	-- common/autotest_common.sh@653 -- # es=255
00:13:24.033   23:46:54	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:24.033   23:46:54	-- common/autotest_common.sh@662 -- # es=127
00:13:24.033   23:46:54	-- common/autotest_common.sh@663 -- # case "$es" in
00:13:24.033   23:46:54	-- common/autotest_common.sh@670 -- # es=1
00:13:24.033   23:46:54	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
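With no continue-on-error flag this time, the injected failures abort the run: perform_tests comes back with JSON-RPC error -32603, bdevperf exits, and wait reaps status 255. The trace then normalizes that status: values above 128 are folded down to 127, the case statement collapses the residue to es=1, and (( !es == 0 )) passes only because the command really did fail, which is what the NOT wrapper asserts. A sketch of that normalization (the exact case arms are an assumption; only the 255 -> 127 -> 1 path is visible above):

  es=0
  wait "$ERR_PID" || es=$?         # 255 in the run above
  (( es > 128 )) && es=127         # fold out-of-range statuses, as traced
  case "$es" in
      0) ;;                        # success would make the NOT wrapper fail
      *) es=1 ;;                   # assumption: any failure collapses to 1
  esac
  (( !es == 0 )) && echo "bdevperf failed as the test expects"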
00:13:24.033  
00:13:24.033  real	0m11.860s
00:13:24.033  user	0m11.787s
00:13:24.033  sys	0m0.942s
00:13:24.033   23:46:54	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:24.033  ************************************
00:13:24.033  END TEST bdev_error
00:13:24.033   23:46:54	-- common/autotest_common.sh@10 -- # set +x
00:13:24.033  ************************************
00:13:24.033   23:46:54	-- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite ''
00:13:24.033   23:46:54	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:24.033   23:46:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:24.033   23:46:54	-- common/autotest_common.sh@10 -- # set +x
00:13:24.033  ************************************
00:13:24.033  START TEST bdev_stat
00:13:24.033  ************************************
00:13:24.033   23:46:54	-- common/autotest_common.sh@1114 -- # stat_test_suite ''
00:13:24.033   23:46:54	-- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT
00:13:24.033   23:46:54	-- bdev/blockdev.sh@594 -- # STAT_PID=111504
00:13:24.033   23:46:54	-- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 111504'
00:13:24.033  Process Bdev IO statistics testing pid: 111504
00:13:24.033   23:46:54	-- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT
00:13:24.033   23:46:54	-- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C ''
00:13:24.033   23:46:54	-- bdev/blockdev.sh@597 -- # waitforlisten 111504
00:13:24.033   23:46:54	-- common/autotest_common.sh@829 -- # '[' -z 111504 ']'
00:13:24.033   23:46:54	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:24.033   23:46:54	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:24.033  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:24.033   23:46:54	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:24.033   23:46:54	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:24.033   23:46:54	-- common/autotest_common.sh@10 -- # set +x
00:13:24.033  [2024-12-13 23:46:54.495826] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:24.033  [2024-12-13 23:46:54.496038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111504 ]
00:13:24.033  [2024-12-13 23:46:54.672750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:24.292  [2024-12-13 23:46:54.908329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:24.292  [2024-12-13 23:46:54.908354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:24.862   23:46:55	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:24.862   23:46:55	-- common/autotest_common.sh@862 -- # return 0
00:13:24.862   23:46:55	-- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512
00:13:24.862   23:46:55	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:24.862   23:46:55	-- common/autotest_common.sh@10 -- # set +x
00:13:24.862  Malloc_STAT
00:13:24.862   23:46:55	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:24.862   23:46:55	-- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT
00:13:24.862   23:46:55	-- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT
00:13:24.862   23:46:55	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:24.862   23:46:55	-- common/autotest_common.sh@899 -- # local i
00:13:24.862   23:46:55	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:24.862   23:46:55	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:24.862   23:46:55	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:13:24.862   23:46:55	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:24.862   23:46:55	-- common/autotest_common.sh@10 -- # set +x
00:13:24.862   23:46:55	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:24.862   23:46:55	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000
00:13:24.862   23:46:55	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:24.862   23:46:55	-- common/autotest_common.sh@10 -- # set +x
00:13:24.862  [
00:13:24.862  {
00:13:24.862  "name": "Malloc_STAT",
00:13:24.862  "aliases": [
00:13:24.862  "a4e28783-ffc4-4ba6-8669-c020b6d21430"
00:13:24.862  ],
00:13:24.862  "product_name": "Malloc disk",
00:13:24.862  "block_size": 512,
00:13:24.862  "num_blocks": 262144,
00:13:24.862  "uuid": "a4e28783-ffc4-4ba6-8669-c020b6d21430",
00:13:24.862  "assigned_rate_limits": {
00:13:24.862  "rw_ios_per_sec": 0,
00:13:24.862  "rw_mbytes_per_sec": 0,
00:13:24.862  "r_mbytes_per_sec": 0,
00:13:24.862  "w_mbytes_per_sec": 0
00:13:24.862  },
00:13:24.862  "claimed": false,
00:13:24.862  "zoned": false,
00:13:24.862  "supported_io_types": {
00:13:24.862  "read": true,
00:13:24.862  "write": true,
00:13:24.862  "unmap": true,
00:13:24.862  "write_zeroes": true,
00:13:24.862  "flush": true,
00:13:24.862  "reset": true,
00:13:24.862  "compare": false,
00:13:24.862  "compare_and_write": false,
00:13:24.862  "abort": true,
00:13:24.862  "nvme_admin": false,
00:13:24.862  "nvme_io": false
00:13:24.862  },
00:13:24.862  "memory_domains": [
00:13:24.862  {
00:13:24.862  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:24.862  "dma_device_type": 2
00:13:24.862  }
00:13:24.862  ],
00:13:24.862  "driver_specific": {}
00:13:24.862  }
00:13:24.862  ]
00:13:24.862   23:46:55	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:24.862   23:46:55	-- common/autotest_common.sh@905 -- # return 0
00:13:24.862   23:46:55	-- bdev/blockdev.sh@603 -- # sleep 2
00:13:24.862   23:46:55	-- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:13:25.120  Running I/O for 10 seconds...
00:13:27.023   23:46:57	-- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT
00:13:27.023   23:46:57	-- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT
00:13:27.023   23:46:57	-- bdev/blockdev.sh@558 -- # local iostats
00:13:27.023   23:46:57	-- bdev/blockdev.sh@559 -- # local io_count1
00:13:27.023   23:46:57	-- bdev/blockdev.sh@560 -- # local io_count2
00:13:27.023   23:46:57	-- bdev/blockdev.sh@561 -- # local iostats_per_channel
00:13:27.023   23:46:57	-- bdev/blockdev.sh@562 -- # local io_count_per_channel1
00:13:27.023   23:46:57	-- bdev/blockdev.sh@563 -- # local io_count_per_channel2
00:13:27.023   23:46:57	-- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0
00:13:27.023    23:46:57	-- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT
00:13:27.023    23:46:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.023    23:46:57	-- common/autotest_common.sh@10 -- # set +x
00:13:27.023    23:46:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.023   23:46:57	-- bdev/blockdev.sh@566 -- # iostats='{
00:13:27.023  "tick_rate": 2200000000,
00:13:27.023  "ticks": 1693561890015,
00:13:27.023  "bdevs": [
00:13:27.023  {
00:13:27.023  "name": "Malloc_STAT",
00:13:27.023  "bytes_read": 525373952,
00:13:27.023  "num_read_ops": 128259,
00:13:27.023  "bytes_written": 0,
00:13:27.023  "num_write_ops": 0,
00:13:27.023  "bytes_unmapped": 0,
00:13:27.023  "num_unmap_ops": 0,
00:13:27.023  "bytes_copied": 0,
00:13:27.023  "num_copy_ops": 0,
00:13:27.023  "read_latency_ticks": 2139129493266,
00:13:27.023  "max_read_latency_ticks": 22667666,
00:13:27.023  "min_read_latency_ticks": 299788,
00:13:27.023  "write_latency_ticks": 0,
00:13:27.023  "max_write_latency_ticks": 0,
00:13:27.023  "min_write_latency_ticks": 0,
00:13:27.023  "unmap_latency_ticks": 0,
00:13:27.023  "max_unmap_latency_ticks": 0,
00:13:27.023  "min_unmap_latency_ticks": 0,
00:13:27.023  "copy_latency_ticks": 0,
00:13:27.023  "max_copy_latency_ticks": 0,
00:13:27.023  "min_copy_latency_ticks": 0,
00:13:27.023  "io_error": {}
00:13:27.023  }
00:13:27.023  ]
00:13:27.023  }'
00:13:27.023    23:46:57	-- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops'
00:13:27.023   23:46:57	-- bdev/blockdev.sh@567 -- # io_count1=128259
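All of the iostat counters are in CPU ticks at the advertised tick_rate (2.2 GHz here), so this first sample already implies the latency the summary table will report: read_latency_ticks / num_read_ops / (tick_rate / 1e6) gives the mean read latency in microseconds. Worked with the numbers above:

  # Mean read latency from the bdev_get_iostat sample above.
  ticks=2139129493266; ops=128259; tick_rate=2200000000
  echo "$ticks / $ops / ($tick_rate / 1000000)" | bc
  # => ~7581 us, consistent with the 7291.71 and 7833.03 us per-core
  #    averages in the final Malloc_STAT table further down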
00:13:27.023    23:46:57	-- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c
00:13:27.023    23:46:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.023    23:46:57	-- common/autotest_common.sh@10 -- # set +x
00:13:27.023    23:46:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.023   23:46:57	-- bdev/blockdev.sh@569 -- # iostats_per_channel='{
00:13:27.023  "tick_rate": 2200000000,
00:13:27.023  "ticks": 1693717087989,
00:13:27.023  "name": "Malloc_STAT",
00:13:27.023  "channels": [
00:13:27.023  {
00:13:27.023  "thread_id": 2,
00:13:27.023  "bytes_read": 263192576,
00:13:27.023  "num_read_ops": 64256,
00:13:27.023  "bytes_written": 0,
00:13:27.023  "num_write_ops": 0,
00:13:27.023  "bytes_unmapped": 0,
00:13:27.023  "num_unmap_ops": 0,
00:13:27.023  "bytes_copied": 0,
00:13:27.023  "num_copy_ops": 0,
00:13:27.023  "read_latency_ticks": 1108488376947,
00:13:27.023  "max_read_latency_ticks": 22667666,
00:13:27.023  "min_read_latency_ticks": 12400244,
00:13:27.023  "write_latency_ticks": 0,
00:13:27.023  "max_write_latency_ticks": 0,
00:13:27.023  "min_write_latency_ticks": 0,
00:13:27.023  "unmap_latency_ticks": 0,
00:13:27.023  "max_unmap_latency_ticks": 0,
00:13:27.023  "min_unmap_latency_ticks": 0,
00:13:27.023  "copy_latency_ticks": 0,
00:13:27.023  "max_copy_latency_ticks": 0,
00:13:27.023  "min_copy_latency_ticks": 0
00:13:27.023  },
00:13:27.023  {
00:13:27.023  "thread_id": 3,
00:13:27.023  "bytes_read": 282066944,
00:13:27.023  "num_read_ops": 68864,
00:13:27.023  "bytes_written": 0,
00:13:27.023  "num_write_ops": 0,
00:13:27.023  "bytes_unmapped": 0,
00:13:27.023  "num_unmap_ops": 0,
00:13:27.023  "bytes_copied": 0,
00:13:27.023  "num_copy_ops": 0,
00:13:27.023  "read_latency_ticks": 1110066057789,
00:13:27.023  "max_read_latency_ticks": 19609904,
00:13:27.023  "min_read_latency_ticks": 10935654,
00:13:27.023  "write_latency_ticks": 0,
00:13:27.023  "max_write_latency_ticks": 0,
00:13:27.023  "min_write_latency_ticks": 0,
00:13:27.023  "unmap_latency_ticks": 0,
00:13:27.023  "max_unmap_latency_ticks": 0,
00:13:27.023  "min_unmap_latency_ticks": 0,
00:13:27.023  "copy_latency_ticks": 0,
00:13:27.023  "max_copy_latency_ticks": 0,
00:13:27.023  "min_copy_latency_ticks": 0
00:13:27.023  }
00:13:27.023  ]
00:13:27.023  }'
00:13:27.023    23:46:57	-- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops'
00:13:27.023   23:46:57	-- bdev/blockdev.sh@570 -- # io_count_per_channel1=64256
00:13:27.023   23:46:57	-- bdev/blockdev.sh@571 -- # io_count_per_channel_all=64256
00:13:27.023    23:46:57	-- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops'
00:13:27.023   23:46:57	-- bdev/blockdev.sh@572 -- # io_count_per_channel2=68864
00:13:27.023   23:46:57	-- bdev/blockdev.sh@573 -- # io_count_per_channel_all=133120
00:13:27.023    23:46:57	-- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT
00:13:27.023    23:46:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.023    23:46:57	-- common/autotest_common.sh@10 -- # set +x
00:13:27.023    23:46:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.023   23:46:57	-- bdev/blockdev.sh@575 -- # iostats='{
00:13:27.024  "tick_rate": 2200000000,
00:13:27.024  "ticks": 1693991092569,
00:13:27.024  "bdevs": [
00:13:27.024  {
00:13:27.024  "name": "Malloc_STAT",
00:13:27.024  "bytes_read": 579899904,
00:13:27.024  "num_read_ops": 141571,
00:13:27.024  "bytes_written": 0,
00:13:27.024  "num_write_ops": 0,
00:13:27.024  "bytes_unmapped": 0,
00:13:27.024  "num_unmap_ops": 0,
00:13:27.024  "bytes_copied": 0,
00:13:27.024  "num_copy_ops": 0,
00:13:27.024  "read_latency_ticks": 2358360521769,
00:13:27.024  "max_read_latency_ticks": 22667666,
00:13:27.024  "min_read_latency_ticks": 299788,
00:13:27.024  "write_latency_ticks": 0,
00:13:27.024  "max_write_latency_ticks": 0,
00:13:27.024  "min_write_latency_ticks": 0,
00:13:27.024  "unmap_latency_ticks": 0,
00:13:27.024  "max_unmap_latency_ticks": 0,
00:13:27.024  "min_unmap_latency_ticks": 0,
00:13:27.024  "copy_latency_ticks": 0,
00:13:27.024  "max_copy_latency_ticks": 0,
00:13:27.024  "min_copy_latency_ticks": 0,
00:13:27.024  "io_error": {}
00:13:27.024  }
00:13:27.024  ]
00:13:27.024  }'
00:13:27.024    23:46:57	-- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops'
00:13:27.283   23:46:57	-- bdev/blockdev.sh@576 -- # io_count2=141571
00:13:27.283   23:46:57	-- bdev/blockdev.sh@581 -- # '[' 133120 -lt 128259 ']'
00:13:27.283   23:46:57	-- bdev/blockdev.sh@581 -- # '[' 133120 -gt 141571 ']'
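These two bracket checks are the core assertion of stat_test_suite: the per-channel counts were sampled between the two whole-bdev samples, so their sum (64256 + 68864 = 133120) must be at least the first aggregate count (128259) and at most the second (141571). The same check in isolation, with the run's own numbers:

  # Bracketing assertion as performed above.
  io_count1=128259; io_count2=141571
  io_count_per_channel_all=$(( 64256 + 68864 ))    # thread 2 + thread 3
  [ "$io_count_per_channel_all" -lt "$io_count1" ] && exit 1
  [ "$io_count_per_channel_all" -gt "$io_count2" ] && exit 1
  echo "per-channel sum $io_count_per_channel_all lies within [$io_count1, $io_count2]"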
00:13:27.283   23:46:57	-- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT
00:13:27.283   23:46:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.283   23:46:57	-- common/autotest_common.sh@10 -- # set +x
00:13:27.283  
00:13:27.283                                                                                                  Latency(us)
00:13:27.283  
[2024-12-13T23:46:58.015Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:27.283  
[2024-12-13T23:46:58.015Z]  Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:13:27.283  	 Malloc_STAT         :       2.18   32589.54     127.30       0.00     0.00    7833.03    1444.77   10307.03
00:13:27.283  
[2024-12-13T23:46:58.015Z]  Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:13:27.283  	 Malloc_STAT         :       2.19   35011.89     136.77       0.00     0.00    7291.71     867.61    8936.73
00:13:27.283  
[2024-12-13T23:46:58.015Z]  ===================================================================================================================
00:13:27.283  
[2024-12-13T23:46:58.015Z]  Total                       :              67601.43     264.07       0.00     0.00    7552.52     867.61   10307.03
00:13:27.283  0
00:13:27.283   23:46:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.283   23:46:57	-- bdev/blockdev.sh@607 -- # killprocess 111504
00:13:27.283   23:46:57	-- common/autotest_common.sh@936 -- # '[' -z 111504 ']'
00:13:27.283   23:46:57	-- common/autotest_common.sh@940 -- # kill -0 111504
00:13:27.283    23:46:57	-- common/autotest_common.sh@941 -- # uname
00:13:27.283   23:46:57	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:27.283    23:46:57	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111504
00:13:27.283   23:46:57	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:27.283   23:46:57	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:27.283   23:46:57	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 111504'
00:13:27.283  killing process with pid 111504
00:13:27.283  Received shutdown signal, test time was about 2.305163 seconds
00:13:27.283  
00:13:27.283                                                                                                  Latency(us)
00:13:27.283  
[2024-12-13T23:46:58.015Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:27.283  
[2024-12-13T23:46:58.015Z]  ===================================================================================================================
00:13:27.283  
[2024-12-13T23:46:58.015Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:13:27.283   23:46:57	-- common/autotest_common.sh@955 -- # kill 111504
00:13:27.283   23:46:57	-- common/autotest_common.sh@960 -- # wait 111504
00:13:28.662   23:46:59	-- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT
00:13:28.662  
00:13:28.662  real	0m4.648s
00:13:28.662  user	0m8.714s
00:13:28.662  sys	0m0.377s
00:13:28.662  ************************************
00:13:28.662  END TEST bdev_stat
00:13:28.662  ************************************
00:13:28.662   23:46:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:28.662   23:46:59	-- common/autotest_common.sh@10 -- # set +x
00:13:28.662   23:46:59	-- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]]
00:13:28.662   23:46:59	-- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]]
00:13:28.662   23:46:59	-- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:13:28.662   23:46:59	-- bdev/blockdev.sh@809 -- # cleanup
00:13:28.662   23:46:59	-- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:13:28.662   23:46:59	-- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:13:28.662   23:46:59	-- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]]
00:13:28.662   23:46:59	-- bdev/blockdev.sh@28 -- # [[ bdev == daos ]]
00:13:28.662   23:46:59	-- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]]
00:13:28.662   23:46:59	-- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]]
00:13:28.662  ************************************
00:13:28.662  END TEST blockdev_general
00:13:28.662  ************************************
00:13:28.662  
00:13:28.662  real	2m19.752s
00:13:28.662  user	5m45.117s
00:13:28.662  sys	0m20.393s
00:13:28.662   23:46:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:28.662   23:46:59	-- common/autotest_common.sh@10 -- # set +x
00:13:28.662   23:46:59	-- spdk/autotest.sh@183 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
00:13:28.662   23:46:59	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:13:28.662   23:46:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:28.662   23:46:59	-- common/autotest_common.sh@10 -- # set +x
00:13:28.662  ************************************
00:13:28.662  START TEST bdev_raid
00:13:28.662  ************************************
00:13:28.662   23:46:59	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
00:13:28.662  * Looking for test storage...
00:13:28.662  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:13:28.662    23:46:59	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:13:28.662     23:46:59	-- common/autotest_common.sh@1690 -- # lcov --version
00:13:28.662     23:46:59	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:13:28.662    23:46:59	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:13:28.662    23:46:59	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:13:28.662    23:46:59	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:13:28.662    23:46:59	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:13:28.662    23:46:59	-- scripts/common.sh@335 -- # IFS=.-:
00:13:28.662    23:46:59	-- scripts/common.sh@335 -- # read -ra ver1
00:13:28.662    23:46:59	-- scripts/common.sh@336 -- # IFS=.-:
00:13:28.662    23:46:59	-- scripts/common.sh@336 -- # read -ra ver2
00:13:28.662    23:46:59	-- scripts/common.sh@337 -- # local 'op=<'
00:13:28.662    23:46:59	-- scripts/common.sh@339 -- # ver1_l=2
00:13:28.662    23:46:59	-- scripts/common.sh@340 -- # ver2_l=1
00:13:28.662    23:46:59	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:13:28.662    23:46:59	-- scripts/common.sh@343 -- # case "$op" in
00:13:28.662    23:46:59	-- scripts/common.sh@344 -- # : 1
00:13:28.662    23:46:59	-- scripts/common.sh@363 -- # (( v = 0 ))
00:13:28.662    23:46:59	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:28.662     23:46:59	-- scripts/common.sh@364 -- # decimal 1
00:13:28.662     23:46:59	-- scripts/common.sh@352 -- # local d=1
00:13:28.662     23:46:59	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:28.662     23:46:59	-- scripts/common.sh@354 -- # echo 1
00:13:28.662    23:46:59	-- scripts/common.sh@364 -- # ver1[v]=1
00:13:28.662     23:46:59	-- scripts/common.sh@365 -- # decimal 2
00:13:28.662     23:46:59	-- scripts/common.sh@352 -- # local d=2
00:13:28.662     23:46:59	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:28.662     23:46:59	-- scripts/common.sh@354 -- # echo 2
00:13:28.662    23:46:59	-- scripts/common.sh@365 -- # ver2[v]=2
00:13:28.662    23:46:59	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:13:28.662    23:46:59	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:13:28.662    23:46:59	-- scripts/common.sh@367 -- # return 0
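The lt/cmp_versions walk above decides whether the installed lcov (1.15 here) predates 2.0, which determines the coverage flags exported as LCOV_OPTS just below: both version strings are split on '.', '-' and ':' and compared field by field, padding the shorter one with zeros. An illustrative reimplementation of that comparison, not the verbatim scripts/common.sh code:

  # Field-wise version compare; returns 0 (true) when $1 < $2.
  version_lt() {
      local IFS=.-: i
      local -a v1 v2
      read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
      for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                          # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2"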
00:13:28.662    23:46:59	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:28.662    23:46:59	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:13:28.662  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:28.662  		--rc genhtml_branch_coverage=1
00:13:28.662  		--rc genhtml_function_coverage=1
00:13:28.662  		--rc genhtml_legend=1
00:13:28.662  		--rc geninfo_all_blocks=1
00:13:28.662  		--rc geninfo_unexecuted_blocks=1
00:13:28.662  		
00:13:28.662  		'
00:13:28.662    23:46:59	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:13:28.662  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:28.662  		--rc genhtml_branch_coverage=1
00:13:28.662  		--rc genhtml_function_coverage=1
00:13:28.662  		--rc genhtml_legend=1
00:13:28.662  		--rc geninfo_all_blocks=1
00:13:28.662  		--rc geninfo_unexecuted_blocks=1
00:13:28.662  		
00:13:28.662  		'
00:13:28.662    23:46:59	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:13:28.662  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:28.662  		--rc genhtml_branch_coverage=1
00:13:28.662  		--rc genhtml_function_coverage=1
00:13:28.662  		--rc genhtml_legend=1
00:13:28.662  		--rc geninfo_all_blocks=1
00:13:28.662  		--rc geninfo_unexecuted_blocks=1
00:13:28.662  		
00:13:28.662  		'
00:13:28.662    23:46:59	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:13:28.662  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:28.662  		--rc genhtml_branch_coverage=1
00:13:28.662  		--rc genhtml_function_coverage=1
00:13:28.662  		--rc genhtml_legend=1
00:13:28.662  		--rc geninfo_all_blocks=1
00:13:28.662  		--rc geninfo_unexecuted_blocks=1
00:13:28.662  		
00:13:28.662  		'
00:13:28.662   23:46:59	-- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:13:28.662    23:46:59	-- bdev/nbd_common.sh@6 -- # set -e
00:13:28.662   23:46:59	-- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
00:13:28.662   23:46:59	-- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR
00:13:28.662    23:46:59	-- bdev/bdev_raid.sh@716 -- # uname -s
00:13:28.662   23:46:59	-- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']'
00:13:28.662   23:46:59	-- bdev/bdev_raid.sh@716 -- # modprobe -n nbd
00:13:28.662   23:46:59	-- bdev/bdev_raid.sh@717 -- # has_nbd=true
00:13:28.662   23:46:59	-- bdev/bdev_raid.sh@718 -- # modprobe nbd
00:13:28.662   23:46:59	-- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0
00:13:28.662   23:46:59	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:28.662   23:46:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:28.662   23:46:59	-- common/autotest_common.sh@10 -- # set +x
00:13:28.662  ************************************
00:13:28.662  START TEST raid_function_test_raid0
00:13:28.662  ************************************
00:13:28.662   23:46:59	-- common/autotest_common.sh@1114 -- # raid_function_test raid0
00:13:28.662   23:46:59	-- bdev/bdev_raid.sh@81 -- # local raid_level=raid0
00:13:28.662   23:46:59	-- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0
00:13:28.662   23:46:59	-- bdev/bdev_raid.sh@83 -- # local raid_bdev
00:13:28.662   23:46:59	-- bdev/bdev_raid.sh@86 -- # raid_pid=111667
00:13:28.662  Process raid pid: 111667
00:13:28.662   23:46:59	-- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 111667'
00:13:28.662   23:46:59	-- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:28.662   23:46:59	-- bdev/bdev_raid.sh@88 -- # waitforlisten 111667 /var/tmp/spdk-raid.sock
00:13:28.662   23:46:59	-- common/autotest_common.sh@829 -- # '[' -z 111667 ']'
00:13:28.662   23:46:59	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:28.662   23:46:59	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:28.662  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:28.662   23:46:59	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:28.662   23:46:59	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:28.662   23:46:59	-- common/autotest_common.sh@10 -- # set +x
00:13:28.922  [2024-12-13 23:46:59.451650] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:28.922  [2024-12-13 23:46:59.451804] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:28.922  [2024-12-13 23:46:59.608787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:29.180  [2024-12-13 23:46:59.867801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:29.439  [2024-12-13 23:47:00.066146] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:29.698   23:47:00	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:29.698   23:47:00	-- common/autotest_common.sh@862 -- # return 0
00:13:29.698   23:47:00	-- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0
00:13:29.698   23:47:00	-- bdev/bdev_raid.sh@67 -- # local raid_level=raid0
00:13:29.698   23:47:00	-- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt
00:13:29.698   23:47:00	-- bdev/bdev_raid.sh@70 -- # cat
00:13:29.698   23:47:00	-- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock
00:13:29.956  [2024-12-13 23:47:00.550514] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:13:29.956  [2024-12-13 23:47:00.552562] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:13:29.956  [2024-12-13 23:47:00.552644] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:13:29.956  [2024-12-13 23:47:00.552657] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:13:29.956  [2024-12-13 23:47:00.552782] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0
00:13:29.956  [2024-12-13 23:47:00.553151] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:13:29.956  [2024-12-13 23:47:00.553176] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006f80
00:13:29.956  [2024-12-13 23:47:00.553313] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:29.956  Base_1
00:13:29.956  Base_2
00:13:29.956   23:47:00	-- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt
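configure_raid_bdev drives all of the setup above through one batched rpc.py invocation: the RPC commands are written to rpcs.txt and fed to the raid socket in a single process, then the file is removed. The command list itself is not echoed into this log, so the following is a plausible reconstruction consistent with the claimed bdevs (two 32 MiB, 512 B-block malloc bases striped into the 131072-block raid0 named 'raid'; the 64 KiB strip size is an assumption):

  # Assumed contents of rpcs.txt for the raid0 case, replayed in batch.
  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  {
      echo 'bdev_malloc_create 32 512 -b Base_1'
      echo 'bdev_malloc_create 32 512 -b Base_2'
      echo 'bdev_raid_create -z 64 -r raid0 -b "Base_1 Base_2" -n raid'
  } > rpcs.txt
  $rpc_py < rpcs.txt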
00:13:29.956    23:47:00	-- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online
00:13:29.956    23:47:00	-- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)'
00:13:30.215   23:47:00	-- bdev/bdev_raid.sh@91 -- # raid_bdev=raid
00:13:30.215   23:47:00	-- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']'
00:13:30.215   23:47:00	-- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0
00:13:30.215   23:47:00	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:30.215   23:47:00	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:13:30.215   23:47:00	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:30.215   23:47:00	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:30.215   23:47:00	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:30.215   23:47:00	-- bdev/nbd_common.sh@12 -- # local i
00:13:30.215   23:47:00	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:30.215   23:47:00	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:30.215   23:47:00	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0
00:13:30.474  [2024-12-13 23:47:01.023051] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:13:30.474  /dev/nbd0
00:13:30.474    23:47:01	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:30.474   23:47:01	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:30.474   23:47:01	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:13:30.474   23:47:01	-- common/autotest_common.sh@867 -- # local i
00:13:30.474   23:47:01	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:13:30.474   23:47:01	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:13:30.474   23:47:01	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:13:30.474   23:47:01	-- common/autotest_common.sh@871 -- # break
00:13:30.474   23:47:01	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:13:30.474   23:47:01	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:13:30.474   23:47:01	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:30.474  1+0 records in
00:13:30.474  1+0 records out
00:13:30.474  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450741 s, 9.1 MB/s
00:13:30.474    23:47:01	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:30.474   23:47:01	-- common/autotest_common.sh@884 -- # size=4096
00:13:30.474   23:47:01	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:30.474   23:47:01	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:13:30.474   23:47:01	-- common/autotest_common.sh@887 -- # return 0
00:13:30.474   23:47:01	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:30.474   23:47:01	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
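nbd_start_disks exports the raid bdev as /dev/nbd0 over the raid socket, and waitfornbd then polls /proc/partitions until the kernel has registered the device, finishing with a direct-I/O read of one 4 KiB block as a smoke test, which is the 1+0 records dd above. A sketch of the wait:

  # waitfornbd, per the trace above; the sleep between probes and the
  # scratch-file path are assumptions (the device appeared on the
  # first probe in this run).
  waitfornbd() {
      local nbd_name=$1 i
      for (( i = 1; i <= 20; i++ )); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  }
  waitfornbd nbd0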
00:13:30.474    23:47:01	-- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock
00:13:30.474    23:47:01	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:30.474     23:47:01	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks
00:13:30.733    23:47:01	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:13:30.733    {
00:13:30.733      "nbd_device": "/dev/nbd0",
00:13:30.733      "bdev_name": "raid"
00:13:30.733    }
00:13:30.733  ]'
00:13:30.733     23:47:01	-- bdev/nbd_common.sh@64 -- # echo '[
00:13:30.733    {
00:13:30.733      "nbd_device": "/dev/nbd0",
00:13:30.733      "bdev_name": "raid"
00:13:30.733    }
00:13:30.733  ]'
00:13:30.733     23:47:01	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:30.733    23:47:01	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:13:30.733     23:47:01	-- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:13:30.733     23:47:01	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:30.733    23:47:01	-- bdev/nbd_common.sh@65 -- # count=1
00:13:30.733    23:47:01	-- bdev/nbd_common.sh@66 -- # echo 1
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@98 -- # count=1
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']'
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@20 -- # local blksize
00:13:30.733    23:47:01	-- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0
00:13:30.733    23:47:01	-- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC
00:13:30.733    23:47:01	-- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@21 -- # blksize=512
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@23 -- # local rw_len=2097152
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321')
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456')
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@26 -- # local unmap_off
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@27 -- # local unmap_len
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096
00:13:30.733  4096+0 records in
00:13:30.733  4096+0 records out
00:13:30.733  2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0304728 s, 68.8 MB/s
00:13:30.733   23:47:01	-- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:13:30.992  4096+0 records in
00:13:30.992  4096+0 records out
00:13:30.992  2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.237606 s, 8.8 MB/s
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@37 -- # (( i = 0 ))
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@38 -- # unmap_off=0
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@39 -- # unmap_len=65536
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:13:30.992  128+0 records in
00:13:30.992  128+0 records out
00:13:30.992  65536 bytes (66 kB, 64 KiB) copied, 0.000322538 s, 203 MB/s
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@38 -- # unmap_off=526336
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@39 -- # unmap_len=1041920
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:13:30.992  2035+0 records in
00:13:30.992  2035+0 records out
00:13:30.992  1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00761393 s, 137 MB/s
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@38 -- # unmap_off=164352
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@39 -- # unmap_len=233472
00:13:30.992   23:47:01	-- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:13:31.251  456+0 records in
00:13:31.251  456+0 records out
00:13:31.251  233472 bytes (233 kB, 228 KiB) copied, 0.00165483 s, 141 MB/s
00:13:31.251   23:47:01	-- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:13:31.251   23:47:01	-- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:31.251   23:47:01	-- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:31.251   23:47:01	-- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:31.251   23:47:01	-- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:31.251   23:47:01	-- bdev/bdev_raid.sh@53 -- # return 0
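Each pass of the loop above checks raid0 unmap semantics end to end: zero a block range in the 2 MiB reference file with dd conv=notrunc, blkdiscard the same byte range on /dev/nbd0, flush the block layer, and byte-compare the whole device against the file. The three (offset, count) pairs (0/128, 1028/2035, 321/456 blocks) cover both aligned and strip-straddling ranges. The middle iteration in isolation:

  # One unmap-verify iteration as traced above (512 B logical blocks).
  unmap_off=$(( 1028 * 512 ))                 # 526336, as computed above
  unmap_len=$(( 2035 * 512 ))                 # 1041920
  dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
  blkdiscard -o "$unmap_off" -l "$unmap_len" /dev/nbd0
  blockdev --flushbufs /dev/nbd0
  cmp -b -n 2097152 /raidrandtest /dev/nbd0   # entire 2 MiB must still match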
00:13:31.251   23:47:01	-- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:13:31.251   23:47:01	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:31.251   23:47:01	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:31.251   23:47:01	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:31.251   23:47:01	-- bdev/nbd_common.sh@51 -- # local i
00:13:31.251   23:47:01	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:31.251   23:47:01	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:13:31.510    23:47:02	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:31.510  [2024-12-13 23:47:02.023052] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:31.510   23:47:02	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:31.510   23:47:02	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:31.510   23:47:02	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:31.510   23:47:02	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:31.510   23:47:02	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:31.510   23:47:02	-- bdev/nbd_common.sh@41 -- # break
00:13:31.510   23:47:02	-- bdev/nbd_common.sh@45 -- # return 0
00:13:31.510    23:47:02	-- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock
00:13:31.510    23:47:02	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:31.510     23:47:02	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks
00:13:31.510    23:47:02	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:13:31.510     23:47:02	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:13:31.510     23:47:02	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:31.769    23:47:02	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:13:31.769     23:47:02	-- bdev/nbd_common.sh@65 -- # echo ''
00:13:31.769     23:47:02	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:31.769     23:47:02	-- bdev/nbd_common.sh@65 -- # true
00:13:31.769    23:47:02	-- bdev/nbd_common.sh@65 -- # count=0
00:13:31.769    23:47:02	-- bdev/nbd_common.sh@66 -- # echo 0
00:13:31.769   23:47:02	-- bdev/bdev_raid.sh@106 -- # count=0
00:13:31.769   23:47:02	-- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']'
00:13:31.769   23:47:02	-- bdev/bdev_raid.sh@111 -- # killprocess 111667
00:13:31.769   23:47:02	-- common/autotest_common.sh@936 -- # '[' -z 111667 ']'
00:13:31.769   23:47:02	-- common/autotest_common.sh@940 -- # kill -0 111667
00:13:31.769    23:47:02	-- common/autotest_common.sh@941 -- # uname
00:13:31.769   23:47:02	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:31.769    23:47:02	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111667
00:13:31.769   23:47:02	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:31.769   23:47:02	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:31.769  killing process with pid 111667
00:13:31.769   23:47:02	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 111667'
00:13:31.769   23:47:02	-- common/autotest_common.sh@955 -- # kill 111667
00:13:31.769  [2024-12-13 23:47:02.297749] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:31.769   23:47:02	-- common/autotest_common.sh@960 -- # wait 111667
00:13:31.769  [2024-12-13 23:47:02.297851] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:31.769  [2024-12-13 23:47:02.297909] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:31.769  [2024-12-13 23:47:02.297920] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid, state offline
00:13:31.769  [2024-12-13 23:47:02.435911] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:33.147   23:47:03	-- bdev/bdev_raid.sh@113 -- # return 0
00:13:33.147  
00:13:33.147  real	0m4.071s
00:13:33.147  user	0m5.113s
00:13:33.147  sys	0m0.940s
00:13:33.147   23:47:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:33.147   23:47:03	-- common/autotest_common.sh@10 -- # set +x
00:13:33.147  ************************************
00:13:33.147  END TEST raid_function_test_raid0
00:13:33.147  ************************************
00:13:33.147   23:47:03	-- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat
00:13:33.147   23:47:03	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:33.147   23:47:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:33.147   23:47:03	-- common/autotest_common.sh@10 -- # set +x
00:13:33.147  ************************************
00:13:33.147  START TEST raid_function_test_concat
00:13:33.147  ************************************
00:13:33.147   23:47:03	-- common/autotest_common.sh@1114 -- # raid_function_test concat
00:13:33.147   23:47:03	-- bdev/bdev_raid.sh@81 -- # local raid_level=concat
00:13:33.147   23:47:03	-- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0
00:13:33.147   23:47:03	-- bdev/bdev_raid.sh@83 -- # local raid_bdev
00:13:33.147   23:47:03	-- bdev/bdev_raid.sh@86 -- # raid_pid=111821
00:13:33.147  Process raid pid: 111821
00:13:33.147   23:47:03	-- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 111821'
00:13:33.147   23:47:03	-- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:33.147   23:47:03	-- bdev/bdev_raid.sh@88 -- # waitforlisten 111821 /var/tmp/spdk-raid.sock
00:13:33.147   23:47:03	-- common/autotest_common.sh@829 -- # '[' -z 111821 ']'
00:13:33.147   23:47:03	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:33.147   23:47:03	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:33.147  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:33.147   23:47:03	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:33.147   23:47:03	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:33.147   23:47:03	-- common/autotest_common.sh@10 -- # set +x
00:13:33.147  [2024-12-13 23:47:03.579871] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:33.147  [2024-12-13 23:47:03.580095] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:33.147  [2024-12-13 23:47:03.756572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:33.406  [2024-12-13 23:47:03.983012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:33.664  [2024-12-13 23:47:04.224837] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:33.923   23:47:04	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:33.923   23:47:04	-- common/autotest_common.sh@862 -- # return 0
00:13:33.923   23:47:04	-- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat
00:13:33.923   23:47:04	-- bdev/bdev_raid.sh@67 -- # local raid_level=concat
00:13:33.923   23:47:04	-- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt
00:13:33.923   23:47:04	-- bdev/bdev_raid.sh@70 -- # cat
00:13:33.923   23:47:04	-- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock
00:13:34.183  [2024-12-13 23:47:04.792048] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:13:34.183  [2024-12-13 23:47:04.794069] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:13:34.183  [2024-12-13 23:47:04.794262] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:13:34.183  [2024-12-13 23:47:04.794369] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:13:34.183  [2024-12-13 23:47:04.794593] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0
00:13:34.183  [2024-12-13 23:47:04.795023] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:13:34.183  [2024-12-13 23:47:04.795177] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006f80
00:13:34.183  [2024-12-13 23:47:04.795410] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:34.183  Base_1
00:13:34.183  Base_2
00:13:34.183   23:47:04	-- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt
00:13:34.183    23:47:04	-- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online
00:13:34.183    23:47:04	-- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)'
00:13:34.441   23:47:05	-- bdev/bdev_raid.sh@91 -- # raid_bdev=raid
00:13:34.441   23:47:05	-- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']'
00:13:34.441   23:47:05	-- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0
00:13:34.441   23:47:05	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:34.441   23:47:05	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:13:34.441   23:47:05	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:34.441   23:47:05	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:34.441   23:47:05	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:34.441   23:47:05	-- bdev/nbd_common.sh@12 -- # local i
00:13:34.441   23:47:05	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:34.441   23:47:05	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:34.441   23:47:05	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0
00:13:34.700  [2024-12-13 23:47:05.244070] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:13:34.700  /dev/nbd0
00:13:34.700    23:47:05	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:34.700   23:47:05	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:34.700   23:47:05	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:13:34.700   23:47:05	-- common/autotest_common.sh@867 -- # local i
00:13:34.700   23:47:05	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:13:34.700   23:47:05	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:13:34.700   23:47:05	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:13:34.700   23:47:05	-- common/autotest_common.sh@871 -- # break
00:13:34.700   23:47:05	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:13:34.700   23:47:05	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:13:34.700   23:47:05	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:34.700  1+0 records in
00:13:34.700  1+0 records out
00:13:34.700  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302735 s, 13.5 MB/s
00:13:34.700    23:47:05	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:34.700   23:47:05	-- common/autotest_common.sh@884 -- # size=4096
00:13:34.700   23:47:05	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:34.700   23:47:05	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:13:34.700   23:47:05	-- common/autotest_common.sh@887 -- # return 0
00:13:34.700   23:47:05	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:34.700   23:47:05	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:34.700    23:47:05	-- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock
00:13:34.700    23:47:05	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:34.700     23:47:05	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks
00:13:34.958    23:47:05	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:13:34.958    {
00:13:34.958      "nbd_device": "/dev/nbd0",
00:13:34.958      "bdev_name": "raid"
00:13:34.958    }
00:13:34.958  ]'
00:13:34.958     23:47:05	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:34.958     23:47:05	-- bdev/nbd_common.sh@64 -- # echo '[
00:13:34.958    {
00:13:34.958      "nbd_device": "/dev/nbd0",
00:13:34.958      "bdev_name": "raid"
00:13:34.958    }
00:13:34.958  ]'
00:13:34.958    23:47:05	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:13:34.958     23:47:05	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:34.958     23:47:05	-- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:13:34.958    23:47:05	-- bdev/nbd_common.sh@65 -- # count=1
00:13:34.958    23:47:05	-- bdev/nbd_common.sh@66 -- # echo 1
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@98 -- # count=1
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']'
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@20 -- # local blksize
00:13:34.958    23:47:05	-- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0
00:13:34.958    23:47:05	-- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC
00:13:34.958    23:47:05	-- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@21 -- # blksize=512
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@23 -- # local rw_len=2097152
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321')
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456')
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@26 -- # local unmap_off
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@27 -- # local unmap_len
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096
00:13:34.958  4096+0 records in
00:13:34.958  4096+0 records out
00:13:34.958  2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0263144 s, 79.7 MB/s
00:13:34.958   23:47:05	-- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:13:35.217  4096+0 records in
00:13:35.217  4096+0 records out
00:13:35.217  2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.264204 s, 7.9 MB/s
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@37 -- # (( i = 0 ))
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@38 -- # unmap_off=0
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@39 -- # unmap_len=65536
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:13:35.217  128+0 records in
00:13:35.217  128+0 records out
00:13:35.217  65536 bytes (66 kB, 64 KiB) copied, 0.000620447 s, 106 MB/s
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@38 -- # unmap_off=526336
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@39 -- # unmap_len=1041920
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:13:35.217  2035+0 records in
00:13:35.217  2035+0 records out
00:13:35.217  1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00738234 s, 141 MB/s
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:35.217   23:47:05	-- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:35.476   23:47:05	-- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:35.476   23:47:05	-- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:35.476   23:47:05	-- bdev/bdev_raid.sh@38 -- # unmap_off=164352
00:13:35.476   23:47:05	-- bdev/bdev_raid.sh@39 -- # unmap_len=233472
00:13:35.476   23:47:05	-- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:13:35.476  456+0 records in
00:13:35.476  456+0 records out
00:13:35.476  233472 bytes (233 kB, 228 KiB) copied, 0.00199069 s, 117 MB/s
00:13:35.476   23:47:05	-- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:13:35.476   23:47:05	-- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:35.476   23:47:05	-- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:35.476   23:47:05	-- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:35.476   23:47:05	-- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:35.476   23:47:05	-- bdev/bdev_raid.sh@53 -- # return 0
00:13:35.476   23:47:05	-- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:13:35.476   23:47:05	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:35.476   23:47:05	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:35.476   23:47:05	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:35.476   23:47:05	-- bdev/nbd_common.sh@51 -- # local i
00:13:35.476   23:47:05	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:35.476   23:47:05	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:13:35.476    23:47:06	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:35.476   23:47:06	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:35.476   23:47:06	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:35.476   23:47:06	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:35.476   23:47:06	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:35.476  [2024-12-13 23:47:06.208422] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:35.476   23:47:06	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:35.735   23:47:06	-- bdev/nbd_common.sh@41 -- # break
00:13:35.735   23:47:06	-- bdev/nbd_common.sh@45 -- # return 0
00:13:35.735    23:47:06	-- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock
00:13:35.735    23:47:06	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:35.735     23:47:06	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks
00:13:35.994    23:47:06	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:13:35.994     23:47:06	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:13:35.994     23:47:06	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:35.994    23:47:06	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:13:35.994     23:47:06	-- bdev/nbd_common.sh@65 -- # echo ''
00:13:35.994     23:47:06	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:35.994     23:47:06	-- bdev/nbd_common.sh@65 -- # true
00:13:35.994    23:47:06	-- bdev/nbd_common.sh@65 -- # count=0
00:13:35.994    23:47:06	-- bdev/nbd_common.sh@66 -- # echo 0
00:13:35.994   23:47:06	-- bdev/bdev_raid.sh@106 -- # count=0
00:13:35.994   23:47:06	-- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']'
00:13:35.994   23:47:06	-- bdev/bdev_raid.sh@111 -- # killprocess 111821
00:13:35.994   23:47:06	-- common/autotest_common.sh@936 -- # '[' -z 111821 ']'
00:13:35.994   23:47:06	-- common/autotest_common.sh@940 -- # kill -0 111821
00:13:35.994    23:47:06	-- common/autotest_common.sh@941 -- # uname
00:13:35.994   23:47:06	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:35.994    23:47:06	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111821
00:13:35.994   23:47:06	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:35.994  killing process with pid 111821
00:13:35.994   23:47:06	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:35.994   23:47:06	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 111821'
00:13:35.994   23:47:06	-- common/autotest_common.sh@955 -- # kill 111821
00:13:35.994  [2024-12-13 23:47:06.545013] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:35.994   23:47:06	-- common/autotest_common.sh@960 -- # wait 111821
00:13:35.994  [2024-12-13 23:47:06.545103] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:35.994  [2024-12-13 23:47:06.545162] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:35.994  [2024-12-13 23:47:06.545173] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid, state offline
00:13:35.994  [2024-12-13 23:47:06.680635] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:37.371   23:47:07	-- bdev/bdev_raid.sh@113 -- # return 0
00:13:37.371  
00:13:37.371  real	0m4.193s
00:13:37.371  user	0m5.267s
00:13:37.371  sys	0m0.985s
00:13:37.371   23:47:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:37.371   23:47:07	-- common/autotest_common.sh@10 -- # set +x
00:13:37.371  ************************************
00:13:37.371  END TEST raid_function_test_concat
00:13:37.371  ************************************
00:13:37.371   23:47:07	-- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test
00:13:37.371   23:47:07	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:13:37.371   23:47:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:37.371   23:47:07	-- common/autotest_common.sh@10 -- # set +x
00:13:37.371  ************************************
00:13:37.371  START TEST raid0_resize_test
00:13:37.371  ************************************
00:13:37.371   23:47:07	-- common/autotest_common.sh@1114 -- # raid0_resize_test
00:13:37.371   23:47:07	-- bdev/bdev_raid.sh@293 -- # local blksize=512
00:13:37.371   23:47:07	-- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32
00:13:37.371   23:47:07	-- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64
00:13:37.371   23:47:07	-- bdev/bdev_raid.sh@296 -- # local blkcnt
00:13:37.371   23:47:07	-- bdev/bdev_raid.sh@297 -- # local raid_size_mb
00:13:37.371   23:47:07	-- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb
00:13:37.371   23:47:07	-- bdev/bdev_raid.sh@301 -- # raid_pid=111979
00:13:37.371  Process raid pid: 111979
00:13:37.371   23:47:07	-- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 111979'
00:13:37.371   23:47:07	-- bdev/bdev_raid.sh@303 -- # waitforlisten 111979 /var/tmp/spdk-raid.sock
00:13:37.371   23:47:07	-- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:37.371   23:47:07	-- common/autotest_common.sh@829 -- # '[' -z 111979 ']'
00:13:37.371   23:47:07	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:37.371   23:47:07	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:37.371  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:37.371   23:47:07	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:37.371   23:47:07	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:37.371   23:47:07	-- common/autotest_common.sh@10 -- # set +x
00:13:37.371  [2024-12-13 23:47:07.837333] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:37.371  [2024-12-13 23:47:07.837562] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:37.371  [2024-12-13 23:47:08.013086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:37.630  [2024-12-13 23:47:08.264487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:37.889  [2024-12-13 23:47:08.452841] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:38.150   23:47:08	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:38.150   23:47:08	-- common/autotest_common.sh@862 -- # return 0
00:13:38.150   23:47:08	-- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512
00:13:38.445  Base_1
00:13:38.445   23:47:08	-- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512
00:13:38.706  Base_2
00:13:38.706   23:47:09	-- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
00:13:38.706  [2024-12-13 23:47:09.413528] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:13:38.706  [2024-12-13 23:47:09.415377] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:13:38.706  [2024-12-13 23:47:09.415442] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:13:38.706  [2024-12-13 23:47:09.415453] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:13:38.706  [2024-12-13 23:47:09.415579] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005450
00:13:38.706  [2024-12-13 23:47:09.415860] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:13:38.706  [2024-12-13 23:47:09.415873] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006f80
00:13:38.706  [2024-12-13 23:47:09.416015] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:38.706   23:47:09	-- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64
00:13:38.965  [2024-12-13 23:47:09.629561] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:13:38.965  [2024-12-13 23:47:09.629601] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:13:38.965  true
00:13:38.965    23:47:09	-- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid
00:13:38.965    23:47:09	-- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks'
00:13:39.223  [2024-12-13 23:47:09.857784] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:39.223   23:47:09	-- bdev/bdev_raid.sh@314 -- # blkcnt=131072
00:13:39.223   23:47:09	-- bdev/bdev_raid.sh@315 -- # raid_size_mb=64
00:13:39.223   23:47:09	-- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']'
00:13:39.223   23:47:09	-- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64
00:13:39.482  [2024-12-13 23:47:10.117618] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:13:39.482  [2024-12-13 23:47:10.117644] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:13:39.482  [2024-12-13 23:47:10.117686] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144
00:13:39.482  [2024-12-13 23:47:10.117740] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:13:39.482  true
00:13:39.482    23:47:10	-- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid
00:13:39.482    23:47:10	-- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks'
00:13:39.741  [2024-12-13 23:47:10.349767] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:39.741   23:47:10	-- bdev/bdev_raid.sh@325 -- # blkcnt=262144
00:13:39.741   23:47:10	-- bdev/bdev_raid.sh@326 -- # raid_size_mb=128
00:13:39.741   23:47:10	-- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']'
00:13:39.741   23:47:10	-- bdev/bdev_raid.sh@332 -- # killprocess 111979
00:13:39.741   23:47:10	-- common/autotest_common.sh@936 -- # '[' -z 111979 ']'
00:13:39.741   23:47:10	-- common/autotest_common.sh@940 -- # kill -0 111979
00:13:39.741    23:47:10	-- common/autotest_common.sh@941 -- # uname
00:13:39.741   23:47:10	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:39.741    23:47:10	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111979
00:13:39.741   23:47:10	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:39.741   23:47:10	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:39.741  killing process with pid 111979
00:13:39.741   23:47:10	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 111979'
00:13:39.741   23:47:10	-- common/autotest_common.sh@955 -- # kill 111979
00:13:39.741   23:47:10	-- common/autotest_common.sh@960 -- # wait 111979
00:13:39.741  [2024-12-13 23:47:10.388255] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:39.741  [2024-12-13 23:47:10.388341] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:39.741  [2024-12-13 23:47:10.388395] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:39.741  [2024-12-13 23:47:10.388406] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Raid, state offline
00:13:39.741  [2024-12-13 23:47:10.388962] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:40.678   23:47:11	-- bdev/bdev_raid.sh@334 -- # return 0
00:13:40.678  
00:13:40.678  real	0m3.545s
00:13:40.678  user	0m5.100s
00:13:40.678  sys	0m0.508s
00:13:40.678  ************************************
00:13:40.678  END TEST raid0_resize_test
00:13:40.678  ************************************
00:13:40.678   23:47:11	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:40.678   23:47:11	-- common/autotest_common.sh@10 -- # set +x
00:13:40.678   23:47:11	-- bdev/bdev_raid.sh@725 -- # for n in {2..4}
00:13:40.678   23:47:11	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:13:40.678   23:47:11	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:13:40.679   23:47:11	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:13:40.679   23:47:11	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:40.679   23:47:11	-- common/autotest_common.sh@10 -- # set +x
00:13:40.679  ************************************
00:13:40.679  START TEST raid_state_function_test
00:13:40.679  ************************************
00:13:40.679   23:47:11	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 false
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:13:40.679    23:47:11	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:13:40.679    23:47:11	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:40.679    23:47:11	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:13:40.679    23:47:11	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:13:40.679    23:47:11	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:40.679    23:47:11	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:13:40.679    23:47:11	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:13:40.679    23:47:11	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@226 -- # raid_pid=112061
00:13:40.679  Process raid pid: 112061
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 112061'
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@228 -- # waitforlisten 112061 /var/tmp/spdk-raid.sock
00:13:40.679   23:47:11	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:40.679   23:47:11	-- common/autotest_common.sh@829 -- # '[' -z 112061 ']'
00:13:40.679  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:40.679   23:47:11	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:40.679   23:47:11	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:40.679   23:47:11	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:40.679   23:47:11	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:40.679   23:47:11	-- common/autotest_common.sh@10 -- # set +x
00:13:40.938  [2024-12-13 23:47:11.446501] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:40.938  [2024-12-13 23:47:11.446708] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:40.938  [2024-12-13 23:47:11.621378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:41.197  [2024-12-13 23:47:11.848699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:41.455  [2024-12-13 23:47:12.018917] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:41.732   23:47:12	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:41.732   23:47:12	-- common/autotest_common.sh@862 -- # return 0
00:13:41.732   23:47:12	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:41.990  [2024-12-13 23:47:12.585445] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:41.990  [2024-12-13 23:47:12.585524] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:41.990  [2024-12-13 23:47:12.585537] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:41.990  [2024-12-13 23:47:12.585555] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:41.990   23:47:12	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:13:41.990   23:47:12	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:41.990   23:47:12	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:41.990   23:47:12	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:41.990   23:47:12	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:41.990   23:47:12	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:41.990   23:47:12	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:41.990   23:47:12	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:41.990   23:47:12	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:41.990   23:47:12	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:41.990    23:47:12	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:41.990    23:47:12	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:42.248   23:47:12	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:42.248    "name": "Existed_Raid",
00:13:42.248    "uuid": "00000000-0000-0000-0000-000000000000",
00:13:42.248    "strip_size_kb": 64,
00:13:42.248    "state": "configuring",
00:13:42.248    "raid_level": "raid0",
00:13:42.248    "superblock": false,
00:13:42.248    "num_base_bdevs": 2,
00:13:42.248    "num_base_bdevs_discovered": 0,
00:13:42.248    "num_base_bdevs_operational": 2,
00:13:42.248    "base_bdevs_list": [
00:13:42.248      {
00:13:42.248        "name": "BaseBdev1",
00:13:42.248        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:42.248        "is_configured": false,
00:13:42.248        "data_offset": 0,
00:13:42.248        "data_size": 0
00:13:42.248      },
00:13:42.248      {
00:13:42.248        "name": "BaseBdev2",
00:13:42.248        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:42.248        "is_configured": false,
00:13:42.248        "data_offset": 0,
00:13:42.248        "data_size": 0
00:13:42.248      }
00:13:42.248    ]
00:13:42.248  }'
00:13:42.248   23:47:12	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:42.248   23:47:12	-- common/autotest_common.sh@10 -- # set +x
00:13:42.815   23:47:13	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:13:43.073  [2024-12-13 23:47:13.653538] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:43.073  [2024-12-13 23:47:13.653607] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:13:43.073   23:47:13	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:43.331  [2024-12-13 23:47:13.833556] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:43.331  [2024-12-13 23:47:13.833650] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:43.331  [2024-12-13 23:47:13.833663] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:43.331  [2024-12-13 23:47:13.833686] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:43.331   23:47:13	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:13:43.331  [2024-12-13 23:47:14.044686] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:43.331  BaseBdev1
00:13:43.331   23:47:14	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:13:43.331   23:47:14	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:13:43.331   23:47:14	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:43.331   23:47:14	-- common/autotest_common.sh@899 -- # local i
00:13:43.331   23:47:14	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:43.331   23:47:14	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:43.331   23:47:14	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:43.590   23:47:14	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:43.848  [
00:13:43.848    {
00:13:43.848      "name": "BaseBdev1",
00:13:43.848      "aliases": [
00:13:43.848        "421b1a6c-17b6-4b14-8ecf-fb3bad2d0411"
00:13:43.848      ],
00:13:43.848      "product_name": "Malloc disk",
00:13:43.848      "block_size": 512,
00:13:43.848      "num_blocks": 65536,
00:13:43.848      "uuid": "421b1a6c-17b6-4b14-8ecf-fb3bad2d0411",
00:13:43.848      "assigned_rate_limits": {
00:13:43.848        "rw_ios_per_sec": 0,
00:13:43.848        "rw_mbytes_per_sec": 0,
00:13:43.848        "r_mbytes_per_sec": 0,
00:13:43.848        "w_mbytes_per_sec": 0
00:13:43.848      },
00:13:43.848      "claimed": true,
00:13:43.848      "claim_type": "exclusive_write",
00:13:43.848      "zoned": false,
00:13:43.848      "supported_io_types": {
00:13:43.848        "read": true,
00:13:43.848        "write": true,
00:13:43.848        "unmap": true,
00:13:43.848        "write_zeroes": true,
00:13:43.848        "flush": true,
00:13:43.848        "reset": true,
00:13:43.848        "compare": false,
00:13:43.848        "compare_and_write": false,
00:13:43.848        "abort": true,
00:13:43.848        "nvme_admin": false,
00:13:43.848        "nvme_io": false
00:13:43.848      },
00:13:43.848      "memory_domains": [
00:13:43.848        {
00:13:43.848          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:43.848          "dma_device_type": 2
00:13:43.848        }
00:13:43.848      ],
00:13:43.848      "driver_specific": {}
00:13:43.848    }
00:13:43.848  ]
00:13:43.848   23:47:14	-- common/autotest_common.sh@905 -- # return 0
00:13:43.848   23:47:14	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:13:43.848   23:47:14	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:43.848   23:47:14	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:43.848   23:47:14	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:43.848   23:47:14	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:43.848   23:47:14	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:43.848   23:47:14	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:43.848   23:47:14	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:43.848   23:47:14	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:43.848   23:47:14	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:43.848    23:47:14	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:43.848    23:47:14	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:44.106   23:47:14	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:44.106    "name": "Existed_Raid",
00:13:44.106    "uuid": "00000000-0000-0000-0000-000000000000",
00:13:44.106    "strip_size_kb": 64,
00:13:44.106    "state": "configuring",
00:13:44.106    "raid_level": "raid0",
00:13:44.106    "superblock": false,
00:13:44.106    "num_base_bdevs": 2,
00:13:44.106    "num_base_bdevs_discovered": 1,
00:13:44.106    "num_base_bdevs_operational": 2,
00:13:44.106    "base_bdevs_list": [
00:13:44.106      {
00:13:44.106        "name": "BaseBdev1",
00:13:44.106        "uuid": "421b1a6c-17b6-4b14-8ecf-fb3bad2d0411",
00:13:44.106        "is_configured": true,
00:13:44.106        "data_offset": 0,
00:13:44.106        "data_size": 65536
00:13:44.106      },
00:13:44.106      {
00:13:44.106        "name": "BaseBdev2",
00:13:44.106        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:44.106        "is_configured": false,
00:13:44.106        "data_offset": 0,
00:13:44.106        "data_size": 0
00:13:44.106      }
00:13:44.106    ]
00:13:44.106  }'
00:13:44.106   23:47:14	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:44.106   23:47:14	-- common/autotest_common.sh@10 -- # set +x
00:13:44.672   23:47:15	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:13:44.929  [2024-12-13 23:47:15.465614] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:44.929  [2024-12-13 23:47:15.465670] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:13:44.929   23:47:15	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:13:44.929   23:47:15	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:45.188  [2024-12-13 23:47:15.733722] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:45.188  [2024-12-13 23:47:15.735547] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:45.188  [2024-12-13 23:47:15.735602] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:45.188   23:47:15	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:13:45.188   23:47:15	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:13:45.188   23:47:15	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:13:45.188   23:47:15	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:45.188   23:47:15	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:45.188   23:47:15	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:45.188   23:47:15	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:45.188   23:47:15	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:45.188   23:47:15	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:45.188   23:47:15	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:45.188   23:47:15	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:45.188   23:47:15	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:45.188    23:47:15	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:45.188    23:47:15	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:45.447   23:47:15	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:45.447    "name": "Existed_Raid",
00:13:45.447    "uuid": "00000000-0000-0000-0000-000000000000",
00:13:45.447    "strip_size_kb": 64,
00:13:45.447    "state": "configuring",
00:13:45.447    "raid_level": "raid0",
00:13:45.447    "superblock": false,
00:13:45.447    "num_base_bdevs": 2,
00:13:45.447    "num_base_bdevs_discovered": 1,
00:13:45.447    "num_base_bdevs_operational": 2,
00:13:45.447    "base_bdevs_list": [
00:13:45.447      {
00:13:45.447        "name": "BaseBdev1",
00:13:45.447        "uuid": "421b1a6c-17b6-4b14-8ecf-fb3bad2d0411",
00:13:45.447        "is_configured": true,
00:13:45.447        "data_offset": 0,
00:13:45.447        "data_size": 65536
00:13:45.447      },
00:13:45.447      {
00:13:45.447        "name": "BaseBdev2",
00:13:45.447        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:45.447        "is_configured": false,
00:13:45.447        "data_offset": 0,
00:13:45.447        "data_size": 0
00:13:45.447      }
00:13:45.447    ]
00:13:45.447  }'
00:13:45.447   23:47:15	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:45.447   23:47:15	-- common/autotest_common.sh@10 -- # set +x
00:13:46.013   23:47:16	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:13:46.272  [2024-12-13 23:47:16.902234] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:46.272  [2024-12-13 23:47:16.902276] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:13:46.272  [2024-12-13 23:47:16.902284] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:13:46.272  [2024-12-13 23:47:16.902399] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0
00:13:46.272  [2024-12-13 23:47:16.902759] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:13:46.272  [2024-12-13 23:47:16.902774] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80
00:13:46.272  [2024-12-13 23:47:16.903049] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:46.272  BaseBdev2
00:13:46.272   23:47:16	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:13:46.272   23:47:16	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:13:46.272   23:47:16	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:46.272   23:47:16	-- common/autotest_common.sh@899 -- # local i
00:13:46.272   23:47:16	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:46.272   23:47:16	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:46.272   23:47:16	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:46.530   23:47:17	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:46.788  [
00:13:46.788    {
00:13:46.788      "name": "BaseBdev2",
00:13:46.788      "aliases": [
00:13:46.788        "c5c20648-6926-487c-a686-eeb2a6d98b59"
00:13:46.788      ],
00:13:46.788      "product_name": "Malloc disk",
00:13:46.788      "block_size": 512,
00:13:46.788      "num_blocks": 65536,
00:13:46.788      "uuid": "c5c20648-6926-487c-a686-eeb2a6d98b59",
00:13:46.788      "assigned_rate_limits": {
00:13:46.788        "rw_ios_per_sec": 0,
00:13:46.788        "rw_mbytes_per_sec": 0,
00:13:46.788        "r_mbytes_per_sec": 0,
00:13:46.788        "w_mbytes_per_sec": 0
00:13:46.788      },
00:13:46.788      "claimed": true,
00:13:46.788      "claim_type": "exclusive_write",
00:13:46.788      "zoned": false,
00:13:46.788      "supported_io_types": {
00:13:46.788        "read": true,
00:13:46.788        "write": true,
00:13:46.788        "unmap": true,
00:13:46.788        "write_zeroes": true,
00:13:46.788        "flush": true,
00:13:46.788        "reset": true,
00:13:46.788        "compare": false,
00:13:46.788        "compare_and_write": false,
00:13:46.788        "abort": true,
00:13:46.788        "nvme_admin": false,
00:13:46.788        "nvme_io": false
00:13:46.788      },
00:13:46.788      "memory_domains": [
00:13:46.788        {
00:13:46.788          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:46.788          "dma_device_type": 2
00:13:46.788        }
00:13:46.788      ],
00:13:46.788      "driver_specific": {}
00:13:46.788    }
00:13:46.788  ]
00:13:46.788   23:47:17	-- common/autotest_common.sh@905 -- # return 0
00:13:46.788   23:47:17	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:13:46.788   23:47:17	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:13:46.788   23:47:17	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:13:46.788   23:47:17	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:46.788   23:47:17	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:13:46.788   23:47:17	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:46.788   23:47:17	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:46.788   23:47:17	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:46.788   23:47:17	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:46.788   23:47:17	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:46.788   23:47:17	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:46.788   23:47:17	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:46.788    23:47:17	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:46.788    23:47:17	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:47.047   23:47:17	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:47.047    "name": "Existed_Raid",
00:13:47.047    "uuid": "cac651d9-e6ec-4ef6-9780-7482e3cb7424",
00:13:47.047    "strip_size_kb": 64,
00:13:47.047    "state": "online",
00:13:47.047    "raid_level": "raid0",
00:13:47.047    "superblock": false,
00:13:47.047    "num_base_bdevs": 2,
00:13:47.047    "num_base_bdevs_discovered": 2,
00:13:47.047    "num_base_bdevs_operational": 2,
00:13:47.047    "base_bdevs_list": [
00:13:47.047      {
00:13:47.047        "name": "BaseBdev1",
00:13:47.047        "uuid": "421b1a6c-17b6-4b14-8ecf-fb3bad2d0411",
00:13:47.047        "is_configured": true,
00:13:47.047        "data_offset": 0,
00:13:47.047        "data_size": 65536
00:13:47.047      },
00:13:47.047      {
00:13:47.047        "name": "BaseBdev2",
00:13:47.047        "uuid": "c5c20648-6926-487c-a686-eeb2a6d98b59",
00:13:47.047        "is_configured": true,
00:13:47.047        "data_offset": 0,
00:13:47.047        "data_size": 65536
00:13:47.047      }
00:13:47.047    ]
00:13:47.047  }'
00:13:47.047   23:47:17	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:47.047   23:47:17	-- common/autotest_common.sh@10 -- # set +x
00:13:47.613   23:47:18	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:13:47.872  [2024-12-13 23:47:18.457881] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:47.872  [2024-12-13 23:47:18.457907] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:47.872  [2024-12-13 23:47:18.457965] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:47.872   23:47:18	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:13:47.872   23:47:18	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:13:47.872   23:47:18	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:13:47.872   23:47:18	-- bdev/bdev_raid.sh@197 -- # return 1
00:13:47.872   23:47:18	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:13:47.872   23:47:18	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:13:47.872   23:47:18	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:47.872   23:47:18	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:13:47.872   23:47:18	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:47.872   23:47:18	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:47.872   23:47:18	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:13:47.872   23:47:18	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:47.872   23:47:18	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:47.872   23:47:18	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:47.872   23:47:18	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:47.872    23:47:18	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:47.872    23:47:18	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:48.130   23:47:18	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:48.130    "name": "Existed_Raid",
00:13:48.130    "uuid": "cac651d9-e6ec-4ef6-9780-7482e3cb7424",
00:13:48.130    "strip_size_kb": 64,
00:13:48.130    "state": "offline",
00:13:48.130    "raid_level": "raid0",
00:13:48.130    "superblock": false,
00:13:48.130    "num_base_bdevs": 2,
00:13:48.130    "num_base_bdevs_discovered": 1,
00:13:48.130    "num_base_bdevs_operational": 1,
00:13:48.130    "base_bdevs_list": [
00:13:48.130      {
00:13:48.130        "name": null,
00:13:48.130        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:48.130        "is_configured": false,
00:13:48.130        "data_offset": 0,
00:13:48.130        "data_size": 65536
00:13:48.130      },
00:13:48.130      {
00:13:48.130        "name": "BaseBdev2",
00:13:48.130        "uuid": "c5c20648-6926-487c-a686-eeb2a6d98b59",
00:13:48.130        "is_configured": true,
00:13:48.130        "data_offset": 0,
00:13:48.130        "data_size": 65536
00:13:48.130      }
00:13:48.130    ]
00:13:48.130  }'
00:13:48.130   23:47:18	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:48.130   23:47:18	-- common/autotest_common.sh@10 -- # set +x
00:13:48.697   23:47:19	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:13:48.697   23:47:19	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:13:48.697    23:47:19	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:13:48.697    23:47:19	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:48.955   23:47:19	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:13:48.955   23:47:19	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:48.955   23:47:19	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:13:49.214  [2024-12-13 23:47:19.773414] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:13:49.214  [2024-12-13 23:47:19.773468] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline
00:13:49.214   23:47:19	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:13:49.214   23:47:19	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:13:49.214    23:47:19	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:49.214    23:47:19	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:13:49.472   23:47:20	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:13:49.472   23:47:20	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:13:49.472   23:47:20	-- bdev/bdev_raid.sh@287 -- # killprocess 112061
00:13:49.472   23:47:20	-- common/autotest_common.sh@936 -- # '[' -z 112061 ']'
00:13:49.472   23:47:20	-- common/autotest_common.sh@940 -- # kill -0 112061
00:13:49.472    23:47:20	-- common/autotest_common.sh@941 -- # uname
00:13:49.472   23:47:20	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:49.472    23:47:20	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112061
00:13:49.472  killing process with pid 112061
00:13:49.472   23:47:20	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:49.472   23:47:20	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:49.472   23:47:20	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 112061'
00:13:49.472   23:47:20	-- common/autotest_common.sh@955 -- # kill 112061
00:13:49.472  [2024-12-13 23:47:20.128669] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:49.472   23:47:20	-- common/autotest_common.sh@960 -- # wait 112061
00:13:49.472  [2024-12-13 23:47:20.128778] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@289 -- # return 0
00:13:50.849  
00:13:50.849  real	0m9.796s
00:13:50.849  user	0m17.050s
00:13:50.849  sys	0m1.086s
00:13:50.849   23:47:21	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:50.849   23:47:21	-- common/autotest_common.sh@10 -- # set +x
00:13:50.849  ************************************
00:13:50.849  END TEST raid_state_function_test
00:13:50.849  ************************************
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true
00:13:50.849   23:47:21	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:13:50.849   23:47:21	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:50.849   23:47:21	-- common/autotest_common.sh@10 -- # set +x
00:13:50.849  ************************************
00:13:50.849  START TEST raid_state_function_test_sb
00:13:50.849  ************************************
00:13:50.849   23:47:21	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 true
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:13:50.849    23:47:21	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:13:50.849    23:47:21	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:50.849    23:47:21	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:13:50.849    23:47:21	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:13:50.849    23:47:21	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:50.849    23:47:21	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:13:50.849    23:47:21	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:13:50.849    23:47:21	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@226 -- # raid_pid=112383
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 112383'
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:50.849  Process raid pid: 112383
00:13:50.849   23:47:21	-- bdev/bdev_raid.sh@228 -- # waitforlisten 112383 /var/tmp/spdk-raid.sock
00:13:50.849   23:47:21	-- common/autotest_common.sh@829 -- # '[' -z 112383 ']'
00:13:50.849   23:47:21	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:50.849   23:47:21	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:50.849   23:47:21	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:50.849  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:50.849   23:47:21	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:50.849   23:47:21	-- common/autotest_common.sh@10 -- # set +x
00:13:50.849  [2024-12-13 23:47:21.308343] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:50.849  [2024-12-13 23:47:21.308534] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:50.849  [2024-12-13 23:47:21.485913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:51.108  [2024-12-13 23:47:21.705280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:51.366  [2024-12-13 23:47:21.896634] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:51.625   23:47:22	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:51.625   23:47:22	-- common/autotest_common.sh@862 -- # return 0
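The harness has now launched a bare bdev_svc app with bdev_raid debug logging and waited for its RPC socket to answer. A rough sketch of that startup, assuming the repo paths shown in the trace and that waitforlisten is the polling helper from autotest_common.sh (backgrounding with & is inferred from the pid capture above):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock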
00:13:51.625   23:47:22	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:51.884  [2024-12-13 23:47:22.460580] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:51.884  [2024-12-13 23:47:22.460664] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:51.884  [2024-12-13 23:47:22.460678] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:51.884  [2024-12-13 23:47:22.460698] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:51.884   23:47:22	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:13:51.884   23:47:22	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:51.884   23:47:22	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:51.884   23:47:22	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:51.884   23:47:22	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:51.884   23:47:22	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:51.884   23:47:22	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:51.884   23:47:22	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:51.884   23:47:22	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:51.884   23:47:22	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:51.884    23:47:22	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:51.884    23:47:22	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:52.143   23:47:22	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:52.143    "name": "Existed_Raid",
00:13:52.143    "uuid": "3be3a655-dee9-4134-9eb5-c41dd461931a",
00:13:52.143    "strip_size_kb": 64,
00:13:52.143    "state": "configuring",
00:13:52.143    "raid_level": "raid0",
00:13:52.143    "superblock": true,
00:13:52.143    "num_base_bdevs": 2,
00:13:52.143    "num_base_bdevs_discovered": 0,
00:13:52.143    "num_base_bdevs_operational": 2,
00:13:52.143    "base_bdevs_list": [
00:13:52.143      {
00:13:52.143        "name": "BaseBdev1",
00:13:52.143        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:52.143        "is_configured": false,
00:13:52.143        "data_offset": 0,
00:13:52.143        "data_size": 0
00:13:52.143      },
00:13:52.143      {
00:13:52.143        "name": "BaseBdev2",
00:13:52.143        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:52.143        "is_configured": false,
00:13:52.143        "data_offset": 0,
00:13:52.143        "data_size": 0
00:13:52.143      }
00:13:52.143    ]
00:13:52.143  }'
00:13:52.143   23:47:22	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:52.143   23:47:22	-- common/autotest_common.sh@10 -- # set +x
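The array was created before either base bdev existed, so it parks in the "configuring" state with num_base_bdevs_discovered at 0. A minimal way to replay this step by hand, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock ($rpc is shorthand for the scripts/rpc.py invocation from the trace):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Create the array before its members exist; it stays "configuring".
    $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'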
00:13:52.710   23:47:23	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:13:52.970  [2024-12-13 23:47:23.492611] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:52.970  [2024-12-13 23:47:23.492658] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:13:52.970   23:47:23	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:53.229  [2024-12-13 23:47:23.744690] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:53.229  [2024-12-13 23:47:23.744764] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:53.229  [2024-12-13 23:47:23.744776] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:53.229  [2024-12-13 23:47:23.744799] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:53.229   23:47:23	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:13:53.487  [2024-12-13 23:47:23.966041] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:53.487  BaseBdev1
00:13:53.487   23:47:23	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:13:53.487   23:47:23	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:13:53.487   23:47:23	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:53.487   23:47:23	-- common/autotest_common.sh@899 -- # local i
00:13:53.487   23:47:23	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:53.487   23:47:23	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:53.488   23:47:23	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:53.488   23:47:24	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:53.746  [
00:13:53.746    {
00:13:53.746      "name": "BaseBdev1",
00:13:53.746      "aliases": [
00:13:53.746        "eb88cccd-3d1e-4720-b1b3-062c61221fb2"
00:13:53.746      ],
00:13:53.746      "product_name": "Malloc disk",
00:13:53.746      "block_size": 512,
00:13:53.746      "num_blocks": 65536,
00:13:53.746      "uuid": "eb88cccd-3d1e-4720-b1b3-062c61221fb2",
00:13:53.746      "assigned_rate_limits": {
00:13:53.746        "rw_ios_per_sec": 0,
00:13:53.746        "rw_mbytes_per_sec": 0,
00:13:53.746        "r_mbytes_per_sec": 0,
00:13:53.746        "w_mbytes_per_sec": 0
00:13:53.746      },
00:13:53.746      "claimed": true,
00:13:53.746      "claim_type": "exclusive_write",
00:13:53.746      "zoned": false,
00:13:53.746      "supported_io_types": {
00:13:53.746        "read": true,
00:13:53.746        "write": true,
00:13:53.746        "unmap": true,
00:13:53.746        "write_zeroes": true,
00:13:53.746        "flush": true,
00:13:53.746        "reset": true,
00:13:53.746        "compare": false,
00:13:53.746        "compare_and_write": false,
00:13:53.746        "abort": true,
00:13:53.746        "nvme_admin": false,
00:13:53.746        "nvme_io": false
00:13:53.746      },
00:13:53.746      "memory_domains": [
00:13:53.746        {
00:13:53.746          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:53.746          "dma_device_type": 2
00:13:53.746        }
00:13:53.746      ],
00:13:53.746      "driver_specific": {}
00:13:53.746    }
00:13:53.746  ]
00:13:53.746   23:47:24	-- common/autotest_common.sh@905 -- # return 0
00:13:53.746   23:47:24	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:13:53.746   23:47:24	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:53.746   23:47:24	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:53.746   23:47:24	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:53.746   23:47:24	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:53.746   23:47:24	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:53.746   23:47:24	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:53.746   23:47:24	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:53.746   23:47:24	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:53.746   23:47:24	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:53.747    23:47:24	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:53.747    23:47:24	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:54.005   23:47:24	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:54.005    "name": "Existed_Raid",
00:13:54.005    "uuid": "67ca10b3-08a2-4ee6-9fff-3f6115c40cfd",
00:13:54.005    "strip_size_kb": 64,
00:13:54.005    "state": "configuring",
00:13:54.005    "raid_level": "raid0",
00:13:54.005    "superblock": true,
00:13:54.005    "num_base_bdevs": 2,
00:13:54.005    "num_base_bdevs_discovered": 1,
00:13:54.005    "num_base_bdevs_operational": 2,
00:13:54.005    "base_bdevs_list": [
00:13:54.005      {
00:13:54.005        "name": "BaseBdev1",
00:13:54.005        "uuid": "eb88cccd-3d1e-4720-b1b3-062c61221fb2",
00:13:54.005        "is_configured": true,
00:13:54.005        "data_offset": 2048,
00:13:54.005        "data_size": 63488
00:13:54.005      },
00:13:54.005      {
00:13:54.005        "name": "BaseBdev2",
00:13:54.005        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:54.005        "is_configured": false,
00:13:54.005        "data_offset": 0,
00:13:54.005        "data_size": 0
00:13:54.005      }
00:13:54.005    ]
00:13:54.005  }'
00:13:54.005   23:47:24	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:54.005   23:47:24	-- common/autotest_common.sh@10 -- # set +x
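With BaseBdev1 registered, num_base_bdevs_discovered ticks up to 1 while the state stays "configuring" until every member arrives. The waitforbdev pattern traced above reduces to three RPCs; sketch under the same $rpc assumption:

    $rpc bdev_malloc_create 32 512 -b BaseBdev1   # 65536 blocks of 512 B = 32 MiB
    $rpc bdev_wait_for_examine                    # let examine callbacks settle
    $rpc bdev_get_bdevs -b BaseBdev1 -t 2000      # retry for up to 2000 ms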
00:13:54.580   23:47:25	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:13:54.840  [2024-12-13 23:47:25.442378] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:54.840  [2024-12-13 23:47:25.442428] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:13:54.840   23:47:25	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:13:54.840   23:47:25	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:13:55.098   23:47:25	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:13:55.357  BaseBdev1
00:13:55.357   23:47:25	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:13:55.357   23:47:25	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:13:55.357   23:47:25	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:55.357   23:47:25	-- common/autotest_common.sh@899 -- # local i
00:13:55.357   23:47:25	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:55.357   23:47:25	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:55.357   23:47:25	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:55.615   23:47:26	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:55.874  [
00:13:55.874    {
00:13:55.874      "name": "BaseBdev1",
00:13:55.874      "aliases": [
00:13:55.874        "50d75170-13d8-4c98-b6b0-a0cb4df97c5a"
00:13:55.874      ],
00:13:55.874      "product_name": "Malloc disk",
00:13:55.874      "block_size": 512,
00:13:55.874      "num_blocks": 65536,
00:13:55.874      "uuid": "50d75170-13d8-4c98-b6b0-a0cb4df97c5a",
00:13:55.874      "assigned_rate_limits": {
00:13:55.874        "rw_ios_per_sec": 0,
00:13:55.874        "rw_mbytes_per_sec": 0,
00:13:55.874        "r_mbytes_per_sec": 0,
00:13:55.874        "w_mbytes_per_sec": 0
00:13:55.874      },
00:13:55.874      "claimed": false,
00:13:55.874      "zoned": false,
00:13:55.874      "supported_io_types": {
00:13:55.874        "read": true,
00:13:55.874        "write": true,
00:13:55.874        "unmap": true,
00:13:55.874        "write_zeroes": true,
00:13:55.874        "flush": true,
00:13:55.874        "reset": true,
00:13:55.874        "compare": false,
00:13:55.874        "compare_and_write": false,
00:13:55.874        "abort": true,
00:13:55.874        "nvme_admin": false,
00:13:55.874        "nvme_io": false
00:13:55.874      },
00:13:55.874      "memory_domains": [
00:13:55.874        {
00:13:55.874          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:55.874          "dma_device_type": 2
00:13:55.874        }
00:13:55.874      ],
00:13:55.874      "driver_specific": {}
00:13:55.874    }
00:13:55.874  ]
00:13:55.874   23:47:26	-- common/autotest_common.sh@905 -- # return 0
00:13:55.874   23:47:26	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:55.874  [2024-12-13 23:47:26.567170] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:55.874  [2024-12-13 23:47:26.569250] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:55.874  [2024-12-13 23:47:26.569327] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:55.874   23:47:26	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:13:55.874   23:47:26	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:13:55.874   23:47:26	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:13:55.874   23:47:26	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:55.874   23:47:26	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:55.874   23:47:26	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:55.874   23:47:26	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:55.874   23:47:26	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:55.874   23:47:26	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:55.874   23:47:26	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:55.874   23:47:26	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:55.874   23:47:26	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:55.874    23:47:26	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:55.874    23:47:26	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:56.133   23:47:26	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:56.133    "name": "Existed_Raid",
00:13:56.133    "uuid": "aa07f847-baa1-4d9a-b494-9bfdbe9e3644",
00:13:56.133    "strip_size_kb": 64,
00:13:56.133    "state": "configuring",
00:13:56.133    "raid_level": "raid0",
00:13:56.133    "superblock": true,
00:13:56.133    "num_base_bdevs": 2,
00:13:56.133    "num_base_bdevs_discovered": 1,
00:13:56.133    "num_base_bdevs_operational": 2,
00:13:56.133    "base_bdevs_list": [
00:13:56.133      {
00:13:56.133        "name": "BaseBdev1",
00:13:56.133        "uuid": "50d75170-13d8-4c98-b6b0-a0cb4df97c5a",
00:13:56.133        "is_configured": true,
00:13:56.133        "data_offset": 2048,
00:13:56.133        "data_size": 63488
00:13:56.133      },
00:13:56.133      {
00:13:56.133        "name": "BaseBdev2",
00:13:56.133        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:56.133        "is_configured": false,
00:13:56.133        "data_offset": 0,
00:13:56.133        "data_size": 0
00:13:56.133      }
00:13:56.133    ]
00:13:56.133  }'
00:13:56.133   23:47:26	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:56.133   23:47:26	-- common/autotest_common.sh@10 -- # set +x
00:13:56.728   23:47:27	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:13:56.987  [2024-12-13 23:47:27.640170] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:56.987  [2024-12-13 23:47:27.640446] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580
00:13:56.987  [2024-12-13 23:47:27.640461] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:13:56.987  [2024-12-13 23:47:27.640622] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0
00:13:56.987  [2024-12-13 23:47:27.641004] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580
00:13:56.987  [2024-12-13 23:47:27.641031] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580
00:13:56.987  [2024-12-13 23:47:27.641238] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:56.987  BaseBdev2
00:13:56.987   23:47:27	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:13:56.987   23:47:27	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:13:56.987   23:47:27	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:56.987   23:47:27	-- common/autotest_common.sh@899 -- # local i
00:13:56.987   23:47:27	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:56.987   23:47:27	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:56.987   23:47:27	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:57.246   23:47:27	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:57.504  [
00:13:57.504    {
00:13:57.504      "name": "BaseBdev2",
00:13:57.504      "aliases": [
00:13:57.504        "729ea8b7-b009-48ca-b1c2-f6e43598b9ae"
00:13:57.504      ],
00:13:57.504      "product_name": "Malloc disk",
00:13:57.504      "block_size": 512,
00:13:57.504      "num_blocks": 65536,
00:13:57.504      "uuid": "729ea8b7-b009-48ca-b1c2-f6e43598b9ae",
00:13:57.504      "assigned_rate_limits": {
00:13:57.504        "rw_ios_per_sec": 0,
00:13:57.504        "rw_mbytes_per_sec": 0,
00:13:57.504        "r_mbytes_per_sec": 0,
00:13:57.504        "w_mbytes_per_sec": 0
00:13:57.504      },
00:13:57.504      "claimed": true,
00:13:57.504      "claim_type": "exclusive_write",
00:13:57.504      "zoned": false,
00:13:57.504      "supported_io_types": {
00:13:57.504        "read": true,
00:13:57.504        "write": true,
00:13:57.504        "unmap": true,
00:13:57.504        "write_zeroes": true,
00:13:57.504        "flush": true,
00:13:57.504        "reset": true,
00:13:57.504        "compare": false,
00:13:57.504        "compare_and_write": false,
00:13:57.504        "abort": true,
00:13:57.504        "nvme_admin": false,
00:13:57.504        "nvme_io": false
00:13:57.504      },
00:13:57.504      "memory_domains": [
00:13:57.504        {
00:13:57.504          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:57.504          "dma_device_type": 2
00:13:57.504        }
00:13:57.504      ],
00:13:57.504      "driver_specific": {}
00:13:57.504    }
00:13:57.504  ]
00:13:57.504   23:47:28	-- common/autotest_common.sh@905 -- # return 0
00:13:57.505   23:47:28	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:13:57.505   23:47:28	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:13:57.505   23:47:28	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:13:57.505   23:47:28	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:57.505   23:47:28	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:13:57.505   23:47:28	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:57.505   23:47:28	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:57.505   23:47:28	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:57.505   23:47:28	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:57.505   23:47:28	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:57.505   23:47:28	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:57.505   23:47:28	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:57.505    23:47:28	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:57.505    23:47:28	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:57.505   23:47:28	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:57.505    "name": "Existed_Raid",
00:13:57.505    "uuid": "aa07f847-baa1-4d9a-b494-9bfdbe9e3644",
00:13:57.505    "strip_size_kb": 64,
00:13:57.505    "state": "online",
00:13:57.505    "raid_level": "raid0",
00:13:57.505    "superblock": true,
00:13:57.505    "num_base_bdevs": 2,
00:13:57.505    "num_base_bdevs_discovered": 2,
00:13:57.505    "num_base_bdevs_operational": 2,
00:13:57.505    "base_bdevs_list": [
00:13:57.505      {
00:13:57.505        "name": "BaseBdev1",
00:13:57.505        "uuid": "50d75170-13d8-4c98-b6b0-a0cb4df97c5a",
00:13:57.505        "is_configured": true,
00:13:57.505        "data_offset": 2048,
00:13:57.505        "data_size": 63488
00:13:57.505      },
00:13:57.505      {
00:13:57.505        "name": "BaseBdev2",
00:13:57.505        "uuid": "729ea8b7-b009-48ca-b1c2-f6e43598b9ae",
00:13:57.505        "is_configured": true,
00:13:57.505        "data_offset": 2048,
00:13:57.505        "data_size": 63488
00:13:57.505      }
00:13:57.505    ]
00:13:57.505  }'
00:13:57.505   23:47:28	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:57.505   23:47:28	-- common/autotest_common.sh@10 -- # set +x
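Registering the final member triggers automatic assembly: the raid flips from "configuring" to "online" with both base bdevs discovered, and each member now reports data_offset 2048 and data_size 63488 blocks, the superblock written by -s occupying the head of the disk. Sketch of the last step, same assumptions as above:

    $rpc bdev_malloc_create 32 512 -b BaseBdev2
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # "online"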
00:13:58.071   23:47:28	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:13:58.330  [2024-12-13 23:47:29.042463] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:58.330  [2024-12-13 23:47:29.042493] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:58.330  [2024-12-13 23:47:29.042550] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:58.588   23:47:29	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:13:58.589   23:47:29	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:13:58.589   23:47:29	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:13:58.589   23:47:29	-- bdev/bdev_raid.sh@197 -- # return 1
00:13:58.589   23:47:29	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:13:58.589   23:47:29	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:13:58.589   23:47:29	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:58.589   23:47:29	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:13:58.589   23:47:29	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:58.589   23:47:29	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:58.589   23:47:29	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:13:58.589   23:47:29	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:58.589   23:47:29	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:58.589   23:47:29	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:58.589   23:47:29	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:58.589    23:47:29	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:58.589    23:47:29	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:58.847   23:47:29	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:58.847    "name": "Existed_Raid",
00:13:58.847    "uuid": "aa07f847-baa1-4d9a-b494-9bfdbe9e3644",
00:13:58.847    "strip_size_kb": 64,
00:13:58.847    "state": "offline",
00:13:58.847    "raid_level": "raid0",
00:13:58.847    "superblock": true,
00:13:58.847    "num_base_bdevs": 2,
00:13:58.847    "num_base_bdevs_discovered": 1,
00:13:58.847    "num_base_bdevs_operational": 1,
00:13:58.847    "base_bdevs_list": [
00:13:58.847      {
00:13:58.847        "name": null,
00:13:58.847        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:58.847        "is_configured": false,
00:13:58.847        "data_offset": 2048,
00:13:58.847        "data_size": 63488
00:13:58.847      },
00:13:58.847      {
00:13:58.847        "name": "BaseBdev2",
00:13:58.847        "uuid": "729ea8b7-b009-48ca-b1c2-f6e43598b9ae",
00:13:58.847        "is_configured": true,
00:13:58.847        "data_offset": 2048,
00:13:58.847        "data_size": 63488
00:13:58.847      }
00:13:58.847    ]
00:13:58.847  }'
00:13:58.847   23:47:29	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:58.847   23:47:29	-- common/autotest_common.sh@10 -- # set +x
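Because raid0 carries no redundancy (has_redundancy returns 1), losing a member is expected to take the array "offline" rather than degraded: the surviving BaseBdev2 stays listed and the deleted member becomes a null placeholder. Replaying the removal, same assumptions:

    $rpc bdev_malloc_delete BaseBdev1
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # "offline"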
00:13:59.415   23:47:29	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:13:59.415   23:47:29	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:13:59.415    23:47:29	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:59.415    23:47:29	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:13:59.415   23:47:30	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:13:59.415   23:47:30	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:59.415   23:47:30	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:13:59.674  [2024-12-13 23:47:30.374995] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:13:59.674  [2024-12-13 23:47:30.375069] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline
00:13:59.933   23:47:30	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:13:59.933   23:47:30	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:13:59.933    23:47:30	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:13:59.933    23:47:30	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:00.192   23:47:30	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:14:00.192   23:47:30	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:14:00.192   23:47:30	-- bdev/bdev_raid.sh@287 -- # killprocess 112383
00:14:00.192   23:47:30	-- common/autotest_common.sh@936 -- # '[' -z 112383 ']'
00:14:00.192   23:47:30	-- common/autotest_common.sh@940 -- # kill -0 112383
00:14:00.192    23:47:30	-- common/autotest_common.sh@941 -- # uname
00:14:00.192   23:47:30	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:00.192    23:47:30	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112383
00:14:00.192   23:47:30	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:00.192   23:47:30	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:00.192   23:47:30	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 112383'
00:14:00.192  killing process with pid 112383
00:14:00.192   23:47:30	-- common/autotest_common.sh@955 -- # kill 112383
00:14:00.192  [2024-12-13 23:47:30.725318] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:00.192  [2024-12-13 23:47:30.725443] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:00.192   23:47:30	-- common/autotest_common.sh@960 -- # wait 112383
00:14:01.129  ************************************
00:14:01.129  END TEST raid_state_function_test_sb
00:14:01.129  ************************************
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@289 -- # return 0
00:14:01.129  
00:14:01.129  real	0m10.493s
00:14:01.129  user	0m18.123s
00:14:01.129  sys	0m1.355s
00:14:01.129   23:47:31	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:01.129   23:47:31	-- common/autotest_common.sh@10 -- # set +x
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2
00:14:01.129   23:47:31	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:14:01.129   23:47:31	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:01.129   23:47:31	-- common/autotest_common.sh@10 -- # set +x
00:14:01.129  ************************************
00:14:01.129  START TEST raid_superblock_test
00:14:01.129  ************************************
00:14:01.129   23:47:31	-- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 2
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@338 -- # local raid_level=raid0
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']'
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@350 -- # strip_size=64
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@357 -- # raid_pid=112707
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@358 -- # waitforlisten 112707 /var/tmp/spdk-raid.sock
00:14:01.129   23:47:31	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:14:01.129   23:47:31	-- common/autotest_common.sh@829 -- # '[' -z 112707 ']'
00:14:01.129   23:47:31	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:01.129   23:47:31	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:01.129  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:01.129   23:47:31	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:01.129   23:47:31	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:01.129   23:47:31	-- common/autotest_common.sh@10 -- # set +x
00:14:01.129  [2024-12-13 23:47:31.851067] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:01.129  [2024-12-13 23:47:31.851290] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112707 ]
00:14:01.387  [2024-12-13 23:47:32.022493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:01.646  [2024-12-13 23:47:32.280831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:01.905  [2024-12-13 23:47:32.464349] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:02.164   23:47:32	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:02.164   23:47:32	-- common/autotest_common.sh@862 -- # return 0
00:14:02.164   23:47:32	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:14:02.164   23:47:32	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:02.164   23:47:32	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:14:02.164   23:47:32	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:14:02.164   23:47:32	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:14:02.164   23:47:32	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:02.164   23:47:32	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:14:02.164   23:47:32	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:02.164   23:47:32	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:14:02.423  malloc1
00:14:02.423   23:47:33	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:02.682  [2024-12-13 23:47:33.168153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:02.682  [2024-12-13 23:47:33.168237] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:02.682  [2024-12-13 23:47:33.168267] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:14:02.682  [2024-12-13 23:47:33.168311] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:02.682  [2024-12-13 23:47:33.170707] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:02.682  [2024-12-13 23:47:33.170755] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:02.682  pt1
00:14:02.682   23:47:33	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:14:02.682   23:47:33	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:02.682   23:47:33	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:14:02.682   23:47:33	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:14:02.682   23:47:33	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:14:02.682   23:47:33	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:02.682   23:47:33	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:14:02.682   23:47:33	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:02.682   23:47:33	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:14:02.941  malloc2
00:14:02.941   23:47:33	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:02.941  [2024-12-13 23:47:33.640358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:02.941  [2024-12-13 23:47:33.640427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:02.941  [2024-12-13 23:47:33.640467] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:14:02.942  [2024-12-13 23:47:33.640515] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:02.942  [2024-12-13 23:47:33.642883] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:02.942  [2024-12-13 23:47:33.642933] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:02.942  pt2
00:14:02.942   23:47:33	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:14:02.942   23:47:33	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
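raid_superblock_test builds each member as a passthru bdev layered on a malloc disk, with the passthru UUID pinned explicitly (presumably so the bdev keeps a stable identity when it is deleted and recreated later in the test). Sketch of one member, same $rpc assumption:

    $rpc bdev_malloc_create 32 512 -b malloc1
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001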
00:14:02.942   23:47:33	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s
00:14:03.200  [2024-12-13 23:47:33.816432] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:03.200  [2024-12-13 23:47:33.818351] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:03.200  [2024-12-13 23:47:33.818532] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80
00:14:03.200  [2024-12-13 23:47:33.818547] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:14:03.201  [2024-12-13 23:47:33.818885] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:14:03.201  [2024-12-13 23:47:33.819339] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80
00:14:03.201  [2024-12-13 23:47:33.819361] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80
00:14:03.201  [2024-12-13 23:47:33.819484] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:03.201   23:47:33	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:14:03.201   23:47:33	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:03.201   23:47:33	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:03.201   23:47:33	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:03.201   23:47:33	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:03.201   23:47:33	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:03.201   23:47:33	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:03.201   23:47:33	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:03.201   23:47:33	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:03.201   23:47:33	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:03.201    23:47:33	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:03.201    23:47:33	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:03.459   23:47:34	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:03.459    "name": "raid_bdev1",
00:14:03.459    "uuid": "c4602872-254f-4997-91a0-51009ff5521e",
00:14:03.459    "strip_size_kb": 64,
00:14:03.459    "state": "online",
00:14:03.459    "raid_level": "raid0",
00:14:03.459    "superblock": true,
00:14:03.459    "num_base_bdevs": 2,
00:14:03.459    "num_base_bdevs_discovered": 2,
00:14:03.459    "num_base_bdevs_operational": 2,
00:14:03.459    "base_bdevs_list": [
00:14:03.459      {
00:14:03.459        "name": "pt1",
00:14:03.459        "uuid": "2a14822c-fdb6-5fb2-9415-04cf749117bb",
00:14:03.459        "is_configured": true,
00:14:03.459        "data_offset": 2048,
00:14:03.459        "data_size": 63488
00:14:03.459      },
00:14:03.459      {
00:14:03.459        "name": "pt2",
00:14:03.459        "uuid": "2fe1c9c7-18ec-5962-9d7f-3380740cdd10",
00:14:03.459        "is_configured": true,
00:14:03.459        "data_offset": 2048,
00:14:03.459        "data_size": 63488
00:14:03.459      }
00:14:03.459    ]
00:14:03.459  }'
00:14:03.459   23:47:34	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:03.459   23:47:34	-- common/autotest_common.sh@10 -- # set +x
00:14:04.027    23:47:34	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:14:04.027    23:47:34	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:14:04.286  [2024-12-13 23:47:34.830082] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:04.286   23:47:34	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=c4602872-254f-4997-91a0-51009ff5521e
00:14:04.286   23:47:34	-- bdev/bdev_raid.sh@380 -- # '[' -z c4602872-254f-4997-91a0-51009ff5521e ']'
00:14:04.286   23:47:34	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:14:04.545  [2024-12-13 23:47:35.050204] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:04.545  [2024-12-13 23:47:35.050227] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:04.545  [2024-12-13 23:47:35.050293] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:04.545  [2024-12-13 23:47:35.050334] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:04.545  [2024-12-13 23:47:35.050344] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline
00:14:04.545    23:47:35	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:04.545    23:47:35	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:14:04.803   23:47:35	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:14:04.803   23:47:35	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:14:04.803   23:47:35	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:14:04.803   23:47:35	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:14:04.803   23:47:35	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:14:04.803   23:47:35	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:14:05.062    23:47:35	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:14:05.062    23:47:35	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:14:05.321   23:47:35	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:14:05.321   23:47:35	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1
00:14:05.321   23:47:35	-- common/autotest_common.sh@650 -- # local es=0
00:14:05.321   23:47:35	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1
00:14:05.321   23:47:35	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:05.321   23:47:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:05.321    23:47:35	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:05.321   23:47:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:05.321    23:47:35	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:05.321   23:47:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:05.321   23:47:35	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:05.321   23:47:35	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:14:05.321   23:47:35	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1
00:14:05.580  [2024-12-13 23:47:36.106866] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:14:05.580  [2024-12-13 23:47:36.108785] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:14:05.580  [2024-12-13 23:47:36.108846] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:14:05.580  [2024-12-13 23:47:36.108908] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:14:05.580  [2024-12-13 23:47:36.108943] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:05.580  [2024-12-13 23:47:36.108952] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring
00:14:05.580  request:
00:14:05.580  {
00:14:05.580    "name": "raid_bdev1",
00:14:05.580    "raid_level": "raid0",
00:14:05.580    "base_bdevs": [
00:14:05.580      "malloc1",
00:14:05.580      "malloc2"
00:14:05.580    ],
00:14:05.580    "superblock": false,
00:14:05.580    "strip_size_kb": 64,
00:14:05.580    "method": "bdev_raid_create",
00:14:05.580    "req_id": 1
00:14:05.580  }
00:14:05.580  Got JSON-RPC error response
00:14:05.581  response:
00:14:05.581  {
00:14:05.581    "code": -17,
00:14:05.581    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:14:05.581  }
00:14:05.581   23:47:36	-- common/autotest_common.sh@653 -- # es=1
00:14:05.581   23:47:36	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:05.581   23:47:36	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:05.581   23:47:36	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
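The negative test: the superblock written through pt1/pt2 physically resides on the underlying malloc disks, so a fresh bdev_raid_create aimed directly at malloc1/malloc2 is rejected with -17 "File exists" once the examine path spots the existing superblock; the NOT wrapper inverts the exit status so the expected failure passes. By hand this looks roughly like:

    $rpc bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 \
        && echo "unexpected success" \
        || echo "rejected as expected: superblock already present"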
00:14:05.581    23:47:36	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:05.581    23:47:36	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:14:05.581   23:47:36	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:14:05.581   23:47:36	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:14:05.581   23:47:36	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:05.840  [2024-12-13 23:47:36.466902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:05.840  [2024-12-13 23:47:36.466976] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:05.840  [2024-12-13 23:47:36.467007] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:14:05.840  [2024-12-13 23:47:36.467032] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:05.840  [2024-12-13 23:47:36.469404] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:05.840  [2024-12-13 23:47:36.469457] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:05.840  [2024-12-13 23:47:36.469534] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:14:05.840  [2024-12-13 23:47:36.469614] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:05.840  pt1
00:14:05.840   23:47:36	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2
00:14:05.840   23:47:36	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:05.840   23:47:36	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:05.840   23:47:36	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:05.840   23:47:36	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:05.840   23:47:36	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:05.840   23:47:36	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:05.840   23:47:36	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:05.840   23:47:36	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:05.840   23:47:36	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:05.840    23:47:36	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:05.840    23:47:36	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:06.099   23:47:36	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:06.099    "name": "raid_bdev1",
00:14:06.099    "uuid": "c4602872-254f-4997-91a0-51009ff5521e",
00:14:06.099    "strip_size_kb": 64,
00:14:06.099    "state": "configuring",
00:14:06.099    "raid_level": "raid0",
00:14:06.099    "superblock": true,
00:14:06.099    "num_base_bdevs": 2,
00:14:06.099    "num_base_bdevs_discovered": 1,
00:14:06.099    "num_base_bdevs_operational": 2,
00:14:06.099    "base_bdevs_list": [
00:14:06.099      {
00:14:06.099        "name": "pt1",
00:14:06.099        "uuid": "2a14822c-fdb6-5fb2-9415-04cf749117bb",
00:14:06.099        "is_configured": true,
00:14:06.099        "data_offset": 2048,
00:14:06.099        "data_size": 63488
00:14:06.099      },
00:14:06.099      {
00:14:06.099        "name": null,
00:14:06.099        "uuid": "2fe1c9c7-18ec-5962-9d7f-3380740cdd10",
00:14:06.099        "is_configured": false,
00:14:06.099        "data_offset": 2048,
00:14:06.099        "data_size": 63488
00:14:06.099      }
00:14:06.099    ]
00:14:06.099  }'
00:14:06.099   23:47:36	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:06.099   23:47:36	-- common/autotest_common.sh@10 -- # set +x
00:14:06.667   23:47:37	-- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']'
00:14:06.667   23:47:37	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:14:06.667   23:47:37	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:14:06.667   23:47:37	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:06.926  [2024-12-13 23:47:37.519102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:06.926  [2024-12-13 23:47:37.519187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:06.926  [2024-12-13 23:47:37.519220] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:14:06.926  [2024-12-13 23:47:37.519244] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:06.926  [2024-12-13 23:47:37.519861] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:06.926  [2024-12-13 23:47:37.519910] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:06.926  [2024-12-13 23:47:37.519987] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:14:06.926  [2024-12-13 23:47:37.520008] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:06.926  [2024-12-13 23:47:37.520093] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80
00:14:06.926  [2024-12-13 23:47:37.520105] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:14:06.926  [2024-12-13 23:47:37.520519] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:14:06.926  [2024-12-13 23:47:37.520902] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80
00:14:06.926  [2024-12-13 23:47:37.520923] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80
00:14:06.926  [2024-12-13 23:47:37.521035] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:06.926  pt2
00:14:06.926   23:47:37	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:14:06.926   23:47:37	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
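This is re-assembly from the on-disk superblock: recreating pt1 let examine find and claim the superblock, and recreating pt2 completes the set, bringing the raid back online with no bdev_raid_create call at all; the verify step below also checks that the resurrected UUID still matches the original c4602872 value. Sketch of the final step, same assumptions:

    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # "online"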
00:14:06.926   23:47:37	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:14:06.926   23:47:37	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:06.926   23:47:37	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:06.926   23:47:37	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:06.926   23:47:37	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:06.926   23:47:37	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:06.926   23:47:37	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:06.926   23:47:37	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:06.926   23:47:37	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:06.926   23:47:37	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:06.926    23:47:37	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:06.926    23:47:37	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:07.185   23:47:37	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:07.185    "name": "raid_bdev1",
00:14:07.185    "uuid": "c4602872-254f-4997-91a0-51009ff5521e",
00:14:07.185    "strip_size_kb": 64,
00:14:07.185    "state": "online",
00:14:07.185    "raid_level": "raid0",
00:14:07.185    "superblock": true,
00:14:07.185    "num_base_bdevs": 2,
00:14:07.185    "num_base_bdevs_discovered": 2,
00:14:07.185    "num_base_bdevs_operational": 2,
00:14:07.185    "base_bdevs_list": [
00:14:07.185      {
00:14:07.185        "name": "pt1",
00:14:07.185        "uuid": "2a14822c-fdb6-5fb2-9415-04cf749117bb",
00:14:07.185        "is_configured": true,
00:14:07.185        "data_offset": 2048,
00:14:07.185        "data_size": 63488
00:14:07.185      },
00:14:07.185      {
00:14:07.185        "name": "pt2",
00:14:07.185        "uuid": "2fe1c9c7-18ec-5962-9d7f-3380740cdd10",
00:14:07.185        "is_configured": true,
00:14:07.185        "data_offset": 2048,
00:14:07.185        "data_size": 63488
00:14:07.185      }
00:14:07.185    ]
00:14:07.185  }'
00:14:07.185   23:47:37	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:07.185   23:47:37	-- common/autotest_common.sh@10 -- # set +x
00:14:07.753    23:47:38	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:14:07.753    23:47:38	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:14:08.012  [2024-12-13 23:47:38.607804] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:08.012   23:47:38	-- bdev/bdev_raid.sh@430 -- # '[' c4602872-254f-4997-91a0-51009ff5521e '!=' c4602872-254f-4997-91a0-51009ff5521e ']'
00:14:08.012   23:47:38	-- bdev/bdev_raid.sh@434 -- # has_redundancy raid0
00:14:08.012   23:47:38	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:14:08.012   23:47:38	-- bdev/bdev_raid.sh@197 -- # return 1
00:14:08.012   23:47:38	-- bdev/bdev_raid.sh@511 -- # killprocess 112707
00:14:08.012   23:47:38	-- common/autotest_common.sh@936 -- # '[' -z 112707 ']'
00:14:08.012   23:47:38	-- common/autotest_common.sh@940 -- # kill -0 112707
00:14:08.012    23:47:38	-- common/autotest_common.sh@941 -- # uname
00:14:08.012   23:47:38	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:08.012    23:47:38	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112707
00:14:08.012   23:47:38	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:08.012   23:47:38	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:08.012   23:47:38	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 112707'
00:14:08.012  killing process with pid 112707
00:14:08.012   23:47:38	-- common/autotest_common.sh@955 -- # kill 112707
00:14:08.012  [2024-12-13 23:47:38.651435] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:08.012  [2024-12-13 23:47:38.651562] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:08.012   23:47:38	-- common/autotest_common.sh@960 -- # wait 112707
00:14:08.012  [2024-12-13 23:47:38.651946] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:08.012  [2024-12-13 23:47:38.652121] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline
00:14:08.270  [2024-12-13 23:47:38.805953] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:09.206  ************************************
00:14:09.206  END TEST raid_superblock_test
00:14:09.206  ************************************
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@513 -- # return 0
00:14:09.206  
00:14:09.206  real	0m8.048s
00:14:09.206  user	0m13.526s
00:14:09.206  sys	0m1.032s
00:14:09.206   23:47:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:09.206   23:47:39	-- common/autotest_common.sh@10 -- # set +x
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false
00:14:09.206   23:47:39	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:14:09.206   23:47:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:09.206   23:47:39	-- common/autotest_common.sh@10 -- # set +x
00:14:09.206  ************************************
00:14:09.206  START TEST raid_state_function_test
00:14:09.206  ************************************
00:14:09.206   23:47:39	-- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 false
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@202 -- # local raid_level=concat
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:14:09.206    23:47:39	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:14:09.206    23:47:39	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:09.206    23:47:39	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:14:09.206    23:47:39	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:09.206    23:47:39	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:09.206    23:47:39	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:14:09.206    23:47:39	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:09.206    23:47:39	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']'
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@226 -- # raid_pid=112952
00:14:09.206  Process raid pid: 112952
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 112952'
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@228 -- # waitforlisten 112952 /var/tmp/spdk-raid.sock
00:14:09.206   23:47:39	-- common/autotest_common.sh@829 -- # '[' -z 112952 ']'
00:14:09.206   23:47:39	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:09.206   23:47:39	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:09.206  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:09.206   23:47:39	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:09.206   23:47:39	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:09.206   23:47:39	-- common/autotest_common.sh@10 -- # set +x
00:14:09.206   23:47:39	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:14:09.465  [2024-12-13 23:47:39.949340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:09.465  [2024-12-13 23:47:39.949544] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:09.465  [2024-12-13 23:47:40.111977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:09.724  [2024-12-13 23:47:40.328420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:09.983  [2024-12-13 23:47:40.496780] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:10.241   23:47:40	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:10.241   23:47:40	-- common/autotest_common.sh@862 -- # return 0
00:14:10.241   23:47:40	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:10.500  [2024-12-13 23:47:41.083994] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:10.500  [2024-12-13 23:47:41.084071] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:10.500  [2024-12-13 23:47:41.084083] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:10.500  [2024-12-13 23:47:41.084100] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
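Annotation: the create at @232 above is issued before either base bdev exists; the NOTICE/DEBUG lines confirm SPDK accepts the request anyway and parks the array in the configuring state until the named bases are registered. The same call standalone, with arguments identical to the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    # Succeeds even though BaseBdev1/BaseBdev2 are absent; state stays "configuring".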
00:14:10.500   23:47:41	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:14:10.500   23:47:41	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:10.500   23:47:41	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:10.500   23:47:41	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:10.500   23:47:41	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:10.500   23:47:41	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:10.500   23:47:41	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:10.500   23:47:41	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:10.500   23:47:41	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:10.500   23:47:41	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:10.500    23:47:41	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:10.500    23:47:41	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:10.760   23:47:41	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:10.760    "name": "Existed_Raid",
00:14:10.760    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:10.760    "strip_size_kb": 64,
00:14:10.760    "state": "configuring",
00:14:10.760    "raid_level": "concat",
00:14:10.760    "superblock": false,
00:14:10.760    "num_base_bdevs": 2,
00:14:10.760    "num_base_bdevs_discovered": 0,
00:14:10.760    "num_base_bdevs_operational": 2,
00:14:10.760    "base_bdevs_list": [
00:14:10.760      {
00:14:10.760        "name": "BaseBdev1",
00:14:10.760        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:10.760        "is_configured": false,
00:14:10.760        "data_offset": 0,
00:14:10.760        "data_size": 0
00:14:10.760      },
00:14:10.760      {
00:14:10.760        "name": "BaseBdev2",
00:14:10.760        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:10.760        "is_configured": false,
00:14:10.760        "data_offset": 0,
00:14:10.760        "data_size": 0
00:14:10.760      }
00:14:10.760    ]
00:14:10.760  }'
00:14:10.760   23:47:41	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:10.760   23:47:41	-- common/autotest_common.sh@10 -- # set +x
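Annotation: verify_raid_bdev_state, expanded at @117-@129 above, reduces to one RPC, a jq filter, and field comparisons against the expected values. A condensed sketch of the same check (variable names hypothetical; RPC and filter copied from the trace):

    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r '.state' <<< "$info")
    # Both base bdevs are still missing here, so the expected state is "configuring".
    [ "$state" = "configuring" ] || { echo "unexpected state: $state"; exit 1; }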
00:14:11.327   23:47:42	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:11.585  [2024-12-13 23:47:42.248474] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:11.585  [2024-12-13 23:47:42.248511] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:14:11.585   23:47:42	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:11.843  [2024-12-13 23:47:42.504535] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:11.843  [2024-12-13 23:47:42.504607] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:11.843  [2024-12-13 23:47:42.504618] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:11.843  [2024-12-13 23:47:42.504644] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:11.843   23:47:42	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:12.102  [2024-12-13 23:47:42.790088] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:12.102  BaseBdev1
00:14:12.102   23:47:42	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:14:12.102   23:47:42	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:12.102   23:47:42	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:12.102   23:47:42	-- common/autotest_common.sh@899 -- # local i
00:14:12.102   23:47:42	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:12.102   23:47:42	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:12.102   23:47:42	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:12.361   23:47:43	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:12.620  [
00:14:12.620    {
00:14:12.620      "name": "BaseBdev1",
00:14:12.620      "aliases": [
00:14:12.620        "baf731a1-eece-4549-b4c4-3e0e20bdc842"
00:14:12.620      ],
00:14:12.620      "product_name": "Malloc disk",
00:14:12.620      "block_size": 512,
00:14:12.620      "num_blocks": 65536,
00:14:12.620      "uuid": "baf731a1-eece-4549-b4c4-3e0e20bdc842",
00:14:12.620      "assigned_rate_limits": {
00:14:12.620        "rw_ios_per_sec": 0,
00:14:12.620        "rw_mbytes_per_sec": 0,
00:14:12.620        "r_mbytes_per_sec": 0,
00:14:12.620        "w_mbytes_per_sec": 0
00:14:12.620      },
00:14:12.620      "claimed": true,
00:14:12.620      "claim_type": "exclusive_write",
00:14:12.620      "zoned": false,
00:14:12.620      "supported_io_types": {
00:14:12.620        "read": true,
00:14:12.620        "write": true,
00:14:12.620        "unmap": true,
00:14:12.620        "write_zeroes": true,
00:14:12.620        "flush": true,
00:14:12.620        "reset": true,
00:14:12.620        "compare": false,
00:14:12.620        "compare_and_write": false,
00:14:12.620        "abort": true,
00:14:12.620        "nvme_admin": false,
00:14:12.620        "nvme_io": false
00:14:12.620      },
00:14:12.620      "memory_domains": [
00:14:12.620        {
00:14:12.620          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:12.620          "dma_device_type": 2
00:14:12.620        }
00:14:12.620      ],
00:14:12.620      "driver_specific": {}
00:14:12.620    }
00:14:12.620  ]
00:14:12.620   23:47:43	-- common/autotest_common.sh@905 -- # return 0
00:14:12.620   23:47:43	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:14:12.620   23:47:43	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:12.620   23:47:43	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:12.620   23:47:43	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:12.620   23:47:43	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:12.620   23:47:43	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:12.620   23:47:43	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:12.620   23:47:43	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:12.620   23:47:43	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:12.620   23:47:43	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:12.620    23:47:43	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:12.620    23:47:43	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:12.878   23:47:43	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:12.878    "name": "Existed_Raid",
00:14:12.878    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:12.878    "strip_size_kb": 64,
00:14:12.878    "state": "configuring",
00:14:12.878    "raid_level": "concat",
00:14:12.878    "superblock": false,
00:14:12.878    "num_base_bdevs": 2,
00:14:12.878    "num_base_bdevs_discovered": 1,
00:14:12.878    "num_base_bdevs_operational": 2,
00:14:12.878    "base_bdevs_list": [
00:14:12.878      {
00:14:12.878        "name": "BaseBdev1",
00:14:12.878        "uuid": "baf731a1-eece-4549-b4c4-3e0e20bdc842",
00:14:12.878        "is_configured": true,
00:14:12.878        "data_offset": 0,
00:14:12.878        "data_size": 65536
00:14:12.878      },
00:14:12.878      {
00:14:12.878        "name": "BaseBdev2",
00:14:12.878        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:12.878        "is_configured": false,
00:14:12.878        "data_offset": 0,
00:14:12.878        "data_size": 0
00:14:12.878      }
00:14:12.878    ]
00:14:12.878  }'
00:14:12.878   23:47:43	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:12.878   23:47:43	-- common/autotest_common.sh@10 -- # set +x
00:14:13.446   23:47:44	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:13.704  [2024-12-13 23:47:44.238352] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:13.704  [2024-12-13 23:47:44.238387] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:14:13.704   23:47:44	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:14:13.704   23:47:44	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:13.704  [2024-12-13 23:47:44.418415] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:13.704  [2024-12-13 23:47:44.420301] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:13.704  [2024-12-13 23:47:44.420472] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:13.704   23:47:44	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:14:13.704   23:47:44	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:13.704   23:47:44	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:14:13.704   23:47:44	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:13.704   23:47:44	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:13.704   23:47:44	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:13.704   23:47:44	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:13.704   23:47:44	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:13.704   23:47:44	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:13.704   23:47:44	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:13.963   23:47:44	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:13.963   23:47:44	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:13.963    23:47:44	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:13.963    23:47:44	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:13.963   23:47:44	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:13.963    "name": "Existed_Raid",
00:14:13.963    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:13.963    "strip_size_kb": 64,
00:14:13.963    "state": "configuring",
00:14:13.963    "raid_level": "concat",
00:14:13.963    "superblock": false,
00:14:13.963    "num_base_bdevs": 2,
00:14:13.963    "num_base_bdevs_discovered": 1,
00:14:13.963    "num_base_bdevs_operational": 2,
00:14:13.963    "base_bdevs_list": [
00:14:13.963      {
00:14:13.963        "name": "BaseBdev1",
00:14:13.963        "uuid": "baf731a1-eece-4549-b4c4-3e0e20bdc842",
00:14:13.963        "is_configured": true,
00:14:13.963        "data_offset": 0,
00:14:13.963        "data_size": 65536
00:14:13.963      },
00:14:13.963      {
00:14:13.963        "name": "BaseBdev2",
00:14:13.963        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:13.963        "is_configured": false,
00:14:13.963        "data_offset": 0,
00:14:13.963        "data_size": 0
00:14:13.963      }
00:14:13.963    ]
00:14:13.963  }'
00:14:13.963   23:47:44	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:13.963   23:47:44	-- common/autotest_common.sh@10 -- # set +x
00:14:14.904   23:47:45	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:14:14.904  [2024-12-13 23:47:45.556069] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:14.904  [2024-12-13 23:47:45.556258] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:14:14.904  [2024-12-13 23:47:45.556301] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:14:14.904  [2024-12-13 23:47:45.556506] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0
00:14:14.904  [2024-12-13 23:47:45.556975] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:14:14.904  [2024-12-13 23:47:45.557114] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80
00:14:14.904  [2024-12-13 23:47:45.557452] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:14.904  BaseBdev2
00:14:14.904   23:47:45	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:14:14.904   23:47:45	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:14:14.904   23:47:45	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:14.904   23:47:45	-- common/autotest_common.sh@899 -- # local i
00:14:14.904   23:47:45	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:14.904   23:47:45	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:14.904   23:47:45	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:15.163   23:47:45	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:15.422  [
00:14:15.422    {
00:14:15.422      "name": "BaseBdev2",
00:14:15.422      "aliases": [
00:14:15.422        "2170721f-b18b-45e7-a5df-cec4becd0553"
00:14:15.422      ],
00:14:15.422      "product_name": "Malloc disk",
00:14:15.422      "block_size": 512,
00:14:15.422      "num_blocks": 65536,
00:14:15.422      "uuid": "2170721f-b18b-45e7-a5df-cec4becd0553",
00:14:15.422      "assigned_rate_limits": {
00:14:15.422        "rw_ios_per_sec": 0,
00:14:15.422        "rw_mbytes_per_sec": 0,
00:14:15.422        "r_mbytes_per_sec": 0,
00:14:15.422        "w_mbytes_per_sec": 0
00:14:15.422      },
00:14:15.422      "claimed": true,
00:14:15.422      "claim_type": "exclusive_write",
00:14:15.422      "zoned": false,
00:14:15.422      "supported_io_types": {
00:14:15.422        "read": true,
00:14:15.422        "write": true,
00:14:15.422        "unmap": true,
00:14:15.422        "write_zeroes": true,
00:14:15.422        "flush": true,
00:14:15.422        "reset": true,
00:14:15.422        "compare": false,
00:14:15.422        "compare_and_write": false,
00:14:15.422        "abort": true,
00:14:15.422        "nvme_admin": false,
00:14:15.422        "nvme_io": false
00:14:15.422      },
00:14:15.422      "memory_domains": [
00:14:15.422        {
00:14:15.422          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:15.422          "dma_device_type": 2
00:14:15.422        }
00:14:15.422      ],
00:14:15.422      "driver_specific": {}
00:14:15.422    }
00:14:15.422  ]
00:14:15.422   23:47:46	-- common/autotest_common.sh@905 -- # return 0
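Annotation: waitforbdev, traced at @897-@905 above, defaults its timeout to 2000 ms, waits for the examine cycle to settle, then polls for the named bdev with that timeout. Condensed sketch using the same two RPCs as the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b BaseBdev2 -t 2000   # waits up to 2000 ms for BaseBdev2 to appear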
00:14:15.422   23:47:46	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:14:15.422   23:47:46	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:15.422   23:47:46	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:14:15.422   23:47:46	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:15.422   23:47:46	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:15.422   23:47:46	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:15.422   23:47:46	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:15.422   23:47:46	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:15.422   23:47:46	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:15.422   23:47:46	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:15.422   23:47:46	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:15.422   23:47:46	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:15.422    23:47:46	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:15.422    23:47:46	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:15.680   23:47:46	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:15.680    "name": "Existed_Raid",
00:14:15.680    "uuid": "6d6171a9-b3a9-4d0e-8c7b-567e9737cc08",
00:14:15.680    "strip_size_kb": 64,
00:14:15.680    "state": "online",
00:14:15.680    "raid_level": "concat",
00:14:15.680    "superblock": false,
00:14:15.680    "num_base_bdevs": 2,
00:14:15.680    "num_base_bdevs_discovered": 2,
00:14:15.680    "num_base_bdevs_operational": 2,
00:14:15.680    "base_bdevs_list": [
00:14:15.680      {
00:14:15.680        "name": "BaseBdev1",
00:14:15.680        "uuid": "baf731a1-eece-4549-b4c4-3e0e20bdc842",
00:14:15.680        "is_configured": true,
00:14:15.680        "data_offset": 0,
00:14:15.680        "data_size": 65536
00:14:15.680      },
00:14:15.680      {
00:14:15.680        "name": "BaseBdev2",
00:14:15.680        "uuid": "2170721f-b18b-45e7-a5df-cec4becd0553",
00:14:15.680        "is_configured": true,
00:14:15.680        "data_offset": 0,
00:14:15.680        "data_size": 65536
00:14:15.680      }
00:14:15.680    ]
00:14:15.680  }'
00:14:15.680   23:47:46	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:15.680   23:47:46	-- common/autotest_common.sh@10 -- # set +x
00:14:16.274   23:47:46	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:16.549  [2024-12-13 23:47:47.036368] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:16.549  [2024-12-13 23:47:47.036512] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:16.549  [2024-12-13 23:47:47.036676] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:16.549   23:47:47	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:14:16.549   23:47:47	-- bdev/bdev_raid.sh@264 -- # has_redundancy concat
00:14:16.549   23:47:47	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:14:16.549   23:47:47	-- bdev/bdev_raid.sh@197 -- # return 1
00:14:16.549   23:47:47	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:14:16.549   23:47:47	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:14:16.549   23:47:47	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:16.549   23:47:47	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:14:16.549   23:47:47	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:16.549   23:47:47	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:16.549   23:47:47	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:14:16.549   23:47:47	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:16.549   23:47:47	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:16.549   23:47:47	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:16.549   23:47:47	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:16.549    23:47:47	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:16.549    23:47:47	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:16.808   23:47:47	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:16.808    "name": "Existed_Raid",
00:14:16.808    "uuid": "6d6171a9-b3a9-4d0e-8c7b-567e9737cc08",
00:14:16.808    "strip_size_kb": 64,
00:14:16.808    "state": "offline",
00:14:16.808    "raid_level": "concat",
00:14:16.808    "superblock": false,
00:14:16.808    "num_base_bdevs": 2,
00:14:16.808    "num_base_bdevs_discovered": 1,
00:14:16.808    "num_base_bdevs_operational": 1,
00:14:16.808    "base_bdevs_list": [
00:14:16.808      {
00:14:16.808        "name": null,
00:14:16.808        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:16.808        "is_configured": false,
00:14:16.808        "data_offset": 0,
00:14:16.808        "data_size": 65536
00:14:16.808      },
00:14:16.808      {
00:14:16.808        "name": "BaseBdev2",
00:14:16.808        "uuid": "2170721f-b18b-45e7-a5df-cec4becd0553",
00:14:16.808        "is_configured": true,
00:14:16.808        "data_offset": 0,
00:14:16.808        "data_size": 65536
00:14:16.808      }
00:14:16.808    ]
00:14:16.808  }'
00:14:16.808   23:47:47	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:16.808   23:47:47	-- common/autotest_common.sh@10 -- # set +x
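Annotation: the online-to-offline transition above is driven by the has_redundancy branch at @195-@197: concat, like raid0, returns 1 (no redundancy), so deleting BaseBdev1 is expected to leave one discovered/operational base bdev and an offline array. A raid1 array would presumably stay online instead; that branch is not exercised in this part of the trace, so the sketch below labels it as assumed:

    case "$raid_level" in
        raid1) expected_state=online ;;   # assumed redundant branch; not shown in this trace
        *)     expected_state=offline ;;  # raid0/concat: losing any base bdev is fatal
    esac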
00:14:17.373   23:47:47	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:14:17.373   23:47:47	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:17.373    23:47:47	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:17.373    23:47:47	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:14:17.631   23:47:48	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:14:17.631   23:47:48	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:17.631   23:47:48	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:14:17.888  [2024-12-13 23:47:48.424320] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:17.888  [2024-12-13 23:47:48.424507] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline
00:14:17.888   23:47:48	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:14:17.888   23:47:48	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:17.888    23:47:48	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:14:17.888    23:47:48	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:18.147   23:47:48	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:14:18.147   23:47:48	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:14:18.147   23:47:48	-- bdev/bdev_raid.sh@287 -- # killprocess 112952
00:14:18.147   23:47:48	-- common/autotest_common.sh@936 -- # '[' -z 112952 ']'
00:14:18.147   23:47:48	-- common/autotest_common.sh@940 -- # kill -0 112952
00:14:18.147    23:47:48	-- common/autotest_common.sh@941 -- # uname
00:14:18.147   23:47:48	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:18.147    23:47:48	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112952
00:14:18.147   23:47:48	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:18.147   23:47:48	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:18.147   23:47:48	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 112952'
00:14:18.147  killing process with pid 112952
00:14:18.147   23:47:48	-- common/autotest_common.sh@955 -- # kill 112952
00:14:18.147  [2024-12-13 23:47:48.787474] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:18.147   23:47:48	-- common/autotest_common.sh@960 -- # wait 112952
00:14:18.147  [2024-12-13 23:47:48.787768] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@289 -- # return 0
00:14:19.084  
00:14:19.084  real	0m9.818s
00:14:19.084  user	0m17.234s
00:14:19.084  sys	0m1.088s
00:14:19.084   23:47:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:19.084   23:47:49	-- common/autotest_common.sh@10 -- # set +x
00:14:19.084  ************************************
00:14:19.084  END TEST raid_state_function_test
00:14:19.084  ************************************
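Annotation: killprocess, traced twice in this section (@936-@960), is a guard-railed teardown: confirm the pid is still alive with kill -0, refuse to proceed if the process name resolves to sudo, then kill and reap. A self-contained sketch of the same sequence (the trace obtains the pid from waitforlisten; here it comes from $! of a background launch):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    pid=$!
    # ... run tests against the RPC socket ...
    kill -0 "$pid"                                               # still running?
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && exit 1    # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                                          # reap; a SIGTERM exit status is expected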
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true
00:14:19.084   23:47:49	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:14:19.084   23:47:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:19.084   23:47:49	-- common/autotest_common.sh@10 -- # set +x
00:14:19.084  ************************************
00:14:19.084  START TEST raid_state_function_test_sb
00:14:19.084  ************************************
00:14:19.084   23:47:49	-- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 true
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@202 -- # local raid_level=concat
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:14:19.084    23:47:49	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:14:19.084    23:47:49	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:19.084    23:47:49	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:14:19.084    23:47:49	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:19.084    23:47:49	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:19.084    23:47:49	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:14:19.084    23:47:49	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:19.084    23:47:49	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']'
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@226 -- # raid_pid=113273
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 113273'
00:14:19.084  Process raid pid: 113273
00:14:19.084   23:47:49	-- bdev/bdev_raid.sh@228 -- # waitforlisten 113273 /var/tmp/spdk-raid.sock
00:14:19.084   23:47:49	-- common/autotest_common.sh@829 -- # '[' -z 113273 ']'
00:14:19.084   23:47:49	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:19.084   23:47:49	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:19.084   23:47:49	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:19.084  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:19.084   23:47:49	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:19.084   23:47:49	-- common/autotest_common.sh@10 -- # set +x
00:14:19.343  [2024-12-13 23:47:49.835244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:19.343  [2024-12-13 23:47:49.835602] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:19.343  [2024-12-13 23:47:49.989961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:19.602  [2024-12-13 23:47:50.150531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:19.602  [2024-12-13 23:47:50.317523] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:20.170   23:47:50	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:20.170   23:47:50	-- common/autotest_common.sh@862 -- # return 0
00:14:20.170   23:47:50	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:20.429  [2024-12-13 23:47:51.033572] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:20.429  [2024-12-13 23:47:51.033897] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:20.429  [2024-12-13 23:47:51.034004] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:20.429  [2024-12-13 23:47:51.034065] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:20.429   23:47:51	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:14:20.429   23:47:51	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:20.429   23:47:51	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:20.429   23:47:51	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:20.429   23:47:51	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:20.429   23:47:51	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:20.429   23:47:51	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:20.429   23:47:51	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:20.429   23:47:51	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:20.429   23:47:51	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:20.429    23:47:51	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:20.429    23:47:51	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:20.688   23:47:51	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:20.688    "name": "Existed_Raid",
00:14:20.688    "uuid": "ab838b01-62dd-4f59-a9b5-d4cf29db5586",
00:14:20.688    "strip_size_kb": 64,
00:14:20.688    "state": "configuring",
00:14:20.688    "raid_level": "concat",
00:14:20.688    "superblock": true,
00:14:20.688    "num_base_bdevs": 2,
00:14:20.688    "num_base_bdevs_discovered": 0,
00:14:20.688    "num_base_bdevs_operational": 2,
00:14:20.688    "base_bdevs_list": [
00:14:20.688      {
00:14:20.688        "name": "BaseBdev1",
00:14:20.688        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:20.688        "is_configured": false,
00:14:20.688        "data_offset": 0,
00:14:20.688        "data_size": 0
00:14:20.688      },
00:14:20.688      {
00:14:20.688        "name": "BaseBdev2",
00:14:20.688        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:20.688        "is_configured": false,
00:14:20.688        "data_offset": 0,
00:14:20.688        "data_size": 0
00:14:20.688      }
00:14:20.688    ]
00:14:20.688  }'
00:14:20.688   23:47:51	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:20.688   23:47:51	-- common/autotest_common.sh@10 -- # set +x
00:14:21.256   23:47:51	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:21.515  [2024-12-13 23:47:52.017619] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:21.515  [2024-12-13 23:47:52.017779] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:14:21.515   23:47:52	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:21.773  [2024-12-13 23:47:52.261689] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:21.773  [2024-12-13 23:47:52.261885] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:21.773  [2024-12-13 23:47:52.262022] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:21.773  [2024-12-13 23:47:52.262154] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:21.773   23:47:52	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:21.773  [2024-12-13 23:47:52.499131] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:21.773  BaseBdev1
00:14:22.032   23:47:52	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:14:22.032   23:47:52	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:22.032   23:47:52	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:22.032   23:47:52	-- common/autotest_common.sh@899 -- # local i
00:14:22.032   23:47:52	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:22.032   23:47:52	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:22.032   23:47:52	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:22.032   23:47:52	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:22.291  [
00:14:22.291    {
00:14:22.291      "name": "BaseBdev1",
00:14:22.291      "aliases": [
00:14:22.291        "edec6553-6d52-49ed-860c-ce5b0cd02121"
00:14:22.291      ],
00:14:22.291      "product_name": "Malloc disk",
00:14:22.291      "block_size": 512,
00:14:22.291      "num_blocks": 65536,
00:14:22.291      "uuid": "edec6553-6d52-49ed-860c-ce5b0cd02121",
00:14:22.291      "assigned_rate_limits": {
00:14:22.291        "rw_ios_per_sec": 0,
00:14:22.291        "rw_mbytes_per_sec": 0,
00:14:22.291        "r_mbytes_per_sec": 0,
00:14:22.291        "w_mbytes_per_sec": 0
00:14:22.291      },
00:14:22.291      "claimed": true,
00:14:22.291      "claim_type": "exclusive_write",
00:14:22.291      "zoned": false,
00:14:22.291      "supported_io_types": {
00:14:22.291        "read": true,
00:14:22.291        "write": true,
00:14:22.291        "unmap": true,
00:14:22.291        "write_zeroes": true,
00:14:22.291        "flush": true,
00:14:22.291        "reset": true,
00:14:22.291        "compare": false,
00:14:22.291        "compare_and_write": false,
00:14:22.291        "abort": true,
00:14:22.291        "nvme_admin": false,
00:14:22.291        "nvme_io": false
00:14:22.291      },
00:14:22.291      "memory_domains": [
00:14:22.291        {
00:14:22.291          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:22.291          "dma_device_type": 2
00:14:22.291        }
00:14:22.291      ],
00:14:22.291      "driver_specific": {}
00:14:22.291    }
00:14:22.291  ]
00:14:22.291   23:47:52	-- common/autotest_common.sh@905 -- # return 0
00:14:22.291   23:47:52	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:14:22.291   23:47:52	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:22.291   23:47:52	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:22.291   23:47:52	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:22.291   23:47:52	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:22.291   23:47:52	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:22.291   23:47:52	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:22.291   23:47:52	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:22.291   23:47:52	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:22.291   23:47:52	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:22.291    23:47:52	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:22.291    23:47:52	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:22.550   23:47:53	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:22.550    "name": "Existed_Raid",
00:14:22.550    "uuid": "3a9bf8ab-046f-4fdc-9547-dda578082502",
00:14:22.550    "strip_size_kb": 64,
00:14:22.550    "state": "configuring",
00:14:22.550    "raid_level": "concat",
00:14:22.550    "superblock": true,
00:14:22.550    "num_base_bdevs": 2,
00:14:22.550    "num_base_bdevs_discovered": 1,
00:14:22.550    "num_base_bdevs_operational": 2,
00:14:22.550    "base_bdevs_list": [
00:14:22.550      {
00:14:22.550        "name": "BaseBdev1",
00:14:22.550        "uuid": "edec6553-6d52-49ed-860c-ce5b0cd02121",
00:14:22.550        "is_configured": true,
00:14:22.550        "data_offset": 2048,
00:14:22.550        "data_size": 63488
00:14:22.550      },
00:14:22.550      {
00:14:22.550        "name": "BaseBdev2",
00:14:22.550        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:22.550        "is_configured": false,
00:14:22.550        "data_offset": 0,
00:14:22.550        "data_size": 0
00:14:22.550      }
00:14:22.550    ]
00:14:22.550  }'
00:14:22.550   23:47:53	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:22.550   23:47:53	-- common/autotest_common.sh@10 -- # set +x
00:14:23.117   23:47:53	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:23.376  [2024-12-13 23:47:53.988341] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:23.376  [2024-12-13 23:47:53.988534] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:14:23.376   23:47:53	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:14:23.376   23:47:53	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:23.634   23:47:54	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:23.893  BaseBdev1
00:14:23.893   23:47:54	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:14:23.893   23:47:54	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:23.893   23:47:54	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:23.893   23:47:54	-- common/autotest_common.sh@899 -- # local i
00:14:23.893   23:47:54	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:23.893   23:47:54	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:23.893   23:47:54	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:24.152   23:47:54	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:24.410  [
00:14:24.410    {
00:14:24.410      "name": "BaseBdev1",
00:14:24.410      "aliases": [
00:14:24.410        "9b69574a-97c3-40d3-9fba-d27a1d207c71"
00:14:24.410      ],
00:14:24.411      "product_name": "Malloc disk",
00:14:24.411      "block_size": 512,
00:14:24.411      "num_blocks": 65536,
00:14:24.411      "uuid": "9b69574a-97c3-40d3-9fba-d27a1d207c71",
00:14:24.411      "assigned_rate_limits": {
00:14:24.411        "rw_ios_per_sec": 0,
00:14:24.411        "rw_mbytes_per_sec": 0,
00:14:24.411        "r_mbytes_per_sec": 0,
00:14:24.411        "w_mbytes_per_sec": 0
00:14:24.411      },
00:14:24.411      "claimed": false,
00:14:24.411      "zoned": false,
00:14:24.411      "supported_io_types": {
00:14:24.411        "read": true,
00:14:24.411        "write": true,
00:14:24.411        "unmap": true,
00:14:24.411        "write_zeroes": true,
00:14:24.411        "flush": true,
00:14:24.411        "reset": true,
00:14:24.411        "compare": false,
00:14:24.411        "compare_and_write": false,
00:14:24.411        "abort": true,
00:14:24.411        "nvme_admin": false,
00:14:24.411        "nvme_io": false
00:14:24.411      },
00:14:24.411      "memory_domains": [
00:14:24.411        {
00:14:24.411          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:24.411          "dma_device_type": 2
00:14:24.411        }
00:14:24.411      ],
00:14:24.411      "driver_specific": {}
00:14:24.411    }
00:14:24.411  ]
00:14:24.411   23:47:54	-- common/autotest_common.sh@905 -- # return 0
00:14:24.411   23:47:54	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:24.411  [2024-12-13 23:47:55.090425] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:24.411  [2024-12-13 23:47:55.092398] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:24.411  [2024-12-13 23:47:55.092602] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:24.411   23:47:55	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:14:24.411   23:47:55	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:24.411   23:47:55	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:14:24.411   23:47:55	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:24.411   23:47:55	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:24.411   23:47:55	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:24.411   23:47:55	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:24.411   23:47:55	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:24.411   23:47:55	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:24.411   23:47:55	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:24.411   23:47:55	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:24.411   23:47:55	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:24.411    23:47:55	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:24.411    23:47:55	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:24.668   23:47:55	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:24.668    "name": "Existed_Raid",
00:14:24.668    "uuid": "ef0b0b78-e034-4062-8fdc-784cd64d195f",
00:14:24.668    "strip_size_kb": 64,
00:14:24.668    "state": "configuring",
00:14:24.668    "raid_level": "concat",
00:14:24.668    "superblock": true,
00:14:24.668    "num_base_bdevs": 2,
00:14:24.668    "num_base_bdevs_discovered": 1,
00:14:24.668    "num_base_bdevs_operational": 2,
00:14:24.668    "base_bdevs_list": [
00:14:24.668      {
00:14:24.668        "name": "BaseBdev1",
00:14:24.668        "uuid": "9b69574a-97c3-40d3-9fba-d27a1d207c71",
00:14:24.668        "is_configured": true,
00:14:24.668        "data_offset": 2048,
00:14:24.668        "data_size": 63488
00:14:24.668      },
00:14:24.668      {
00:14:24.668        "name": "BaseBdev2",
00:14:24.668        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:24.668        "is_configured": false,
00:14:24.668        "data_offset": 0,
00:14:24.668        "data_size": 0
00:14:24.668      }
00:14:24.668    ]
00:14:24.668  }'
00:14:24.668   23:47:55	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:24.668   23:47:55	-- common/autotest_common.sh@10 -- # set +x
00:14:25.235   23:47:55	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:14:25.494  [2024-12-13 23:47:56.116232] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:25.494  [2024-12-13 23:47:56.116579] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580
00:14:25.494  [2024-12-13 23:47:56.116707] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:14:25.494  BaseBdev2
00:14:25.494  [2024-12-13 23:47:56.116858] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0
00:14:25.494  [2024-12-13 23:47:56.117273] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580
00:14:25.494  [2024-12-13 23:47:56.117429] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580
00:14:25.494  [2024-12-13 23:47:56.117698] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:25.494   23:47:56	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:14:25.494   23:47:56	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:14:25.494   23:47:56	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:25.494   23:47:56	-- common/autotest_common.sh@899 -- # local i
00:14:25.494   23:47:56	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:25.494   23:47:56	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:25.494   23:47:56	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:25.752   23:47:56	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:26.010  [
00:14:26.010    {
00:14:26.010      "name": "BaseBdev2",
00:14:26.010      "aliases": [
00:14:26.010        "a90fb61e-3ab1-4ec7-9bbe-bc6f1e7e5d13"
00:14:26.010      ],
00:14:26.010      "product_name": "Malloc disk",
00:14:26.010      "block_size": 512,
00:14:26.010      "num_blocks": 65536,
00:14:26.010      "uuid": "a90fb61e-3ab1-4ec7-9bbe-bc6f1e7e5d13",
00:14:26.010      "assigned_rate_limits": {
00:14:26.010        "rw_ios_per_sec": 0,
00:14:26.010        "rw_mbytes_per_sec": 0,
00:14:26.010        "r_mbytes_per_sec": 0,
00:14:26.010        "w_mbytes_per_sec": 0
00:14:26.010      },
00:14:26.010      "claimed": true,
00:14:26.010      "claim_type": "exclusive_write",
00:14:26.010      "zoned": false,
00:14:26.010      "supported_io_types": {
00:14:26.010        "read": true,
00:14:26.010        "write": true,
00:14:26.010        "unmap": true,
00:14:26.010        "write_zeroes": true,
00:14:26.010        "flush": true,
00:14:26.010        "reset": true,
00:14:26.010        "compare": false,
00:14:26.010        "compare_and_write": false,
00:14:26.010        "abort": true,
00:14:26.010        "nvme_admin": false,
00:14:26.010        "nvme_io": false
00:14:26.010      },
00:14:26.010      "memory_domains": [
00:14:26.010        {
00:14:26.010          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:26.010          "dma_device_type": 2
00:14:26.010        }
00:14:26.010      ],
00:14:26.010      "driver_specific": {}
00:14:26.010    }
00:14:26.010  ]
00:14:26.010   23:47:56	-- common/autotest_common.sh@905 -- # return 0
00:14:26.010   23:47:56	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:14:26.010   23:47:56	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:26.010   23:47:56	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:14:26.010   23:47:56	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:26.010   23:47:56	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:26.010   23:47:56	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:26.010   23:47:56	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:26.010   23:47:56	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:26.010   23:47:56	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:26.010   23:47:56	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:26.010   23:47:56	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:26.010   23:47:56	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:26.010    23:47:56	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:26.010    23:47:56	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:26.269   23:47:56	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:26.269    "name": "Existed_Raid",
00:14:26.269    "uuid": "ef0b0b78-e034-4062-8fdc-784cd64d195f",
00:14:26.269    "strip_size_kb": 64,
00:14:26.269    "state": "online",
00:14:26.269    "raid_level": "concat",
00:14:26.269    "superblock": true,
00:14:26.269    "num_base_bdevs": 2,
00:14:26.269    "num_base_bdevs_discovered": 2,
00:14:26.269    "num_base_bdevs_operational": 2,
00:14:26.269    "base_bdevs_list": [
00:14:26.269      {
00:14:26.269        "name": "BaseBdev1",
00:14:26.269        "uuid": "9b69574a-97c3-40d3-9fba-d27a1d207c71",
00:14:26.269        "is_configured": true,
00:14:26.269        "data_offset": 2048,
00:14:26.269        "data_size": 63488
00:14:26.269      },
00:14:26.269      {
00:14:26.269        "name": "BaseBdev2",
00:14:26.269        "uuid": "a90fb61e-3ab1-4ec7-9bbe-bc6f1e7e5d13",
00:14:26.269        "is_configured": true,
00:14:26.269        "data_offset": 2048,
00:14:26.269        "data_size": 63488
00:14:26.269      }
00:14:26.269    ]
00:14:26.269  }'
00:14:26.269   23:47:56	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:26.269   23:47:56	-- common/autotest_common.sh@10 -- # set +x
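Annotation: with the -s flag the array carries an on-disk superblock, and the dump above shows the cost: each 65536-block base bdev now exposes data_offset 2048 and data_size 63488, versus 0 and 65536 in the no-superblock run earlier in this section. A quick hedged invariant check over the same RPC output (jq -e makes the exit status reflect the comparison):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all |
      jq -e '.[] | select(.name == "Existed_Raid") | .base_bdevs_list[]
             | select(.is_configured) | .data_offset == 2048 and .data_size == 63488'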
00:14:26.835   23:47:57	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:27.093  [2024-12-13 23:47:57.599608] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:27.093  [2024-12-13 23:47:57.599765] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:27.093  [2024-12-13 23:47:57.599923] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:27.093   23:47:57	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:14:27.093   23:47:57	-- bdev/bdev_raid.sh@264 -- # has_redundancy concat
00:14:27.093   23:47:57	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:14:27.093   23:47:57	-- bdev/bdev_raid.sh@197 -- # return 1
00:14:27.093   23:47:57	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:14:27.093   23:47:57	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:14:27.093   23:47:57	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:27.093   23:47:57	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:14:27.093   23:47:57	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:27.093   23:47:57	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:27.093   23:47:57	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:14:27.093   23:47:57	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:27.093   23:47:57	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:27.093   23:47:57	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:27.093   23:47:57	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:27.093    23:47:57	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:27.093    23:47:57	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:27.351   23:47:57	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:27.351    "name": "Existed_Raid",
00:14:27.351    "uuid": "ef0b0b78-e034-4062-8fdc-784cd64d195f",
00:14:27.351    "strip_size_kb": 64,
00:14:27.351    "state": "offline",
00:14:27.351    "raid_level": "concat",
00:14:27.351    "superblock": true,
00:14:27.351    "num_base_bdevs": 2,
00:14:27.351    "num_base_bdevs_discovered": 1,
00:14:27.351    "num_base_bdevs_operational": 1,
00:14:27.351    "base_bdevs_list": [
00:14:27.351      {
00:14:27.351        "name": null,
00:14:27.351        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:27.351        "is_configured": false,
00:14:27.351        "data_offset": 2048,
00:14:27.351        "data_size": 63488
00:14:27.351      },
00:14:27.351      {
00:14:27.351        "name": "BaseBdev2",
00:14:27.351        "uuid": "a90fb61e-3ab1-4ec7-9bbe-bc6f1e7e5d13",
00:14:27.351        "is_configured": true,
00:14:27.351        "data_offset": 2048,
00:14:27.351        "data_size": 63488
00:14:27.351      }
00:14:27.351    ]
00:14:27.351  }'
00:14:27.351   23:47:57	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:27.351   23:47:57	-- common/autotest_common.sh@10 -- # set +x
00:14:27.918   23:47:58	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:14:27.918   23:47:58	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:27.918    23:47:58	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:27.918    23:47:58	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:14:28.176   23:47:58	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:14:28.176   23:47:58	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:28.176   23:47:58	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:14:28.434  [2024-12-13 23:47:58.999611] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:28.434  [2024-12-13 23:47:58.999841] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline
00:14:28.434   23:47:59	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:14:28.434   23:47:59	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:28.434    23:47:59	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:28.434    23:47:59	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:14:28.693   23:47:59	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:14:28.693   23:47:59	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:14:28.693   23:47:59	-- bdev/bdev_raid.sh@287 -- # killprocess 113273
00:14:28.693   23:47:59	-- common/autotest_common.sh@936 -- # '[' -z 113273 ']'
00:14:28.693   23:47:59	-- common/autotest_common.sh@940 -- # kill -0 113273
00:14:28.693    23:47:59	-- common/autotest_common.sh@941 -- # uname
00:14:28.693   23:47:59	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:28.693    23:47:59	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113273
00:14:28.693   23:47:59	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:28.693   23:47:59	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:28.693   23:47:59	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 113273'
00:14:28.693  killing process with pid 113273
00:14:28.693   23:47:59	-- common/autotest_common.sh@955 -- # kill 113273
00:14:28.693  [2024-12-13 23:47:59.331785] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:28.693   23:47:59	-- common/autotest_common.sh@960 -- # wait 113273
00:14:28.693  [2024-12-13 23:47:59.332090] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
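
killprocess, traced above, checks that the PID is still alive and inspects its process name (reactor_0 here) before killing it and waiting for it to exit. A condensed sketch of that logic; note the final wait only works because the app was started as a child of the same shell:

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                 # still alive?
    ps --no-headers -o comm= "$pid"            # process name, reactor_0 for an SPDK app
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                # reap the child so the test can move on
  }
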
00:14:29.629  ************************************
00:14:29.629  END TEST raid_state_function_test_sb
00:14:29.629  ************************************
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@289 -- # return 0
00:14:29.629  
00:14:29.629  real	0m10.478s
00:14:29.629  user	0m18.156s
00:14:29.629  sys	0m1.341s
00:14:29.629   23:48:00	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:29.629   23:48:00	-- common/autotest_common.sh@10 -- # set +x
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2
00:14:29.629   23:48:00	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:14:29.629   23:48:00	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:29.629   23:48:00	-- common/autotest_common.sh@10 -- # set +x
00:14:29.629  ************************************
00:14:29.629  START TEST raid_superblock_test
00:14:29.629  ************************************
00:14:29.629   23:48:00	-- common/autotest_common.sh@1114 -- # raid_superblock_test concat 2
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@338 -- # local raid_level=concat
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']'
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@350 -- # strip_size=64
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@357 -- # raid_pid=113603
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@358 -- # waitforlisten 113603 /var/tmp/spdk-raid.sock
00:14:29.629   23:48:00	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:14:29.629   23:48:00	-- common/autotest_common.sh@829 -- # '[' -z 113603 ']'
00:14:29.629   23:48:00	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:29.629   23:48:00	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:29.629   23:48:00	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:29.629  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:29.629   23:48:00	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:29.629   23:48:00	-- common/autotest_common.sh@10 -- # set +x
00:14:29.888  [2024-12-13 23:48:00.381172] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:29.888  [2024-12-13 23:48:00.382175] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113603 ]
00:14:29.888  [2024-12-13 23:48:00.551251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:30.147  [2024-12-13 23:48:00.705110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:30.147  [2024-12-13 23:48:00.869988] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:30.715   23:48:01	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:30.715   23:48:01	-- common/autotest_common.sh@862 -- # return 0
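
The (( i == 0 )) / return 0 pair above is the success path of waitforlisten, which blocks until the freshly started bdev_svc app answers RPCs on the UNIX socket. A simplified sketch; the rpc_get_methods probe is an assumption standing in for whatever liveness check the real helper uses, and max_retries=100 comes from the trace:

  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk-raid.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
        &>/dev/null && return 0                # socket is up and answering
      sleep 0.5
    done
    return 1
  }
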
00:14:30.715   23:48:01	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:14:30.715   23:48:01	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:30.715   23:48:01	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:14:30.715   23:48:01	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:14:30.715   23:48:01	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:14:30.715   23:48:01	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:30.715   23:48:01	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:14:30.715   23:48:01	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:30.715   23:48:01	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:14:30.973  malloc1
00:14:30.973   23:48:01	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:31.231  [2024-12-13 23:48:01.801223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:31.231  [2024-12-13 23:48:01.801573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:31.231  [2024-12-13 23:48:01.801666] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:14:31.231  [2024-12-13 23:48:01.802002] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:31.231  [2024-12-13 23:48:01.804428] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:31.231  [2024-12-13 23:48:01.804597] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:31.231  pt1
00:14:31.231   23:48:01	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:14:31.231   23:48:01	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:31.231   23:48:01	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:14:31.231   23:48:01	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:14:31.231   23:48:01	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:14:31.231   23:48:01	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:31.231   23:48:01	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:14:31.231   23:48:01	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:31.231   23:48:01	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:14:31.490  malloc2
00:14:31.490   23:48:02	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:31.748  [2024-12-13 23:48:02.252049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:31.748  [2024-12-13 23:48:02.254024] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:31.748  [2024-12-13 23:48:02.254261] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:14:31.748  [2024-12-13 23:48:02.254678] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:31.748  [2024-12-13 23:48:02.260254] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:31.748  [2024-12-13 23:48:02.260588] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:31.748  pt2
00:14:31.748   23:48:02	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:14:31.748   23:48:02	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
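
The loop that just finished builds the base-bdev stack: each of the two iterations creates a 32 MiB malloc bdev with 512-byte blocks and wraps it in a passthru bdev with a fixed UUID, so the superblock test works with deterministic identifiers. The same loop reduced to its RPC calls:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  num_base_bdevs=2
  for ((i = 1; i <= num_base_bdevs; i++)); do
    $rpc bdev_malloc_create 32 512 -b "malloc$i"    # 32 MiB / 512 B = 65536 blocks
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
      -u "$(printf '00000000-0000-0000-0000-%012d' "$i")"
  done
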
00:14:31.748   23:48:02	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
00:14:31.748  [2024-12-13 23:48:02.449207] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:31.748  [2024-12-13 23:48:02.451306] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:31.748  [2024-12-13 23:48:02.451602] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80
00:14:31.748  [2024-12-13 23:48:02.451721] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:14:31.748  [2024-12-13 23:48:02.451878] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:14:31.748  [2024-12-13 23:48:02.452351] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80
00:14:31.748  [2024-12-13 23:48:02.452469] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80
00:14:31.748  [2024-12-13 23:48:02.452695] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
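
bdev_raid_create then assembles the two passthru bdevs into the array: -r concat with -z 64 sets a 64 KiB strip size, and -s writes an on-disk superblock, which is consistent with the data_offset of 2048 blocks (1 MiB at 512 B) reported in the dumps around it. The call in isolation:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # -s: write a superblock; without it data_offset stays 0 (see the raid1 test later).
  $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
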
00:14:31.748   23:48:02	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:14:31.748   23:48:02	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:31.748   23:48:02	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:31.748   23:48:02	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:31.748   23:48:02	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:31.748   23:48:02	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:31.748   23:48:02	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:31.748   23:48:02	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:31.748   23:48:02	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:31.748   23:48:02	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:31.748    23:48:02	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:31.748    23:48:02	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:32.007   23:48:02	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:32.007    "name": "raid_bdev1",
00:14:32.007    "uuid": "4d4080f0-bfce-430c-8e46-dea060c76ccc",
00:14:32.007    "strip_size_kb": 64,
00:14:32.007    "state": "online",
00:14:32.007    "raid_level": "concat",
00:14:32.007    "superblock": true,
00:14:32.007    "num_base_bdevs": 2,
00:14:32.007    "num_base_bdevs_discovered": 2,
00:14:32.007    "num_base_bdevs_operational": 2,
00:14:32.007    "base_bdevs_list": [
00:14:32.007      {
00:14:32.007        "name": "pt1",
00:14:32.007        "uuid": "0f0296c9-9158-56c3-b16d-343db05d6ce5",
00:14:32.007        "is_configured": true,
00:14:32.007        "data_offset": 2048,
00:14:32.007        "data_size": 63488
00:14:32.007      },
00:14:32.007      {
00:14:32.007        "name": "pt2",
00:14:32.007        "uuid": "70ec1b88-e69c-5be5-ba66-5610203f357c",
00:14:32.007        "is_configured": true,
00:14:32.007        "data_offset": 2048,
00:14:32.007        "data_size": 63488
00:14:32.007      }
00:14:32.007    ]
00:14:32.007  }'
00:14:32.007   23:48:02	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:32.007   23:48:02	-- common/autotest_common.sh@10 -- # set +x
00:14:32.573    23:48:03	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:14:32.573    23:48:03	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:14:32.832  [2024-12-13 23:48:03.457427] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:32.832   23:48:03	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=4d4080f0-bfce-430c-8e46-dea060c76ccc
00:14:32.832   23:48:03	-- bdev/bdev_raid.sh@380 -- # '[' -z 4d4080f0-bfce-430c-8e46-dea060c76ccc ']'
00:14:32.832   23:48:03	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:14:33.090  [2024-12-13 23:48:03.685334] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:33.090  [2024-12-13 23:48:03.685482] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:33.090  [2024-12-13 23:48:03.685702] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:33.090  [2024-12-13 23:48:03.685843] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:33.090  [2024-12-13 23:48:03.685939] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline
00:14:33.090    23:48:03	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:33.090    23:48:03	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:14:33.349   23:48:03	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:14:33.349   23:48:03	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:14:33.349   23:48:03	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:14:33.349   23:48:03	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:14:33.607   23:48:04	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:14:33.607   23:48:04	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:14:33.865    23:48:04	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:14:33.865    23:48:04	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:14:33.865   23:48:04	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
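
The steps since the UUID was captured tear the stack down top-first: delete the RAID, confirm bdev_raid_get_bdevs comes back empty, delete each passthru, then use the jq any-filter just traced to prove no passthru bdev survived. As one sequence:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_delete raid_bdev1
  [ -z "$($rpc bdev_raid_get_bdevs all | jq -r '.[]')" ]    # no raid bdevs left
  for pt in pt1 pt2; do
    $rpc bdev_passthru_delete "$pt"
  done
  # 'any' is false once no bdev with product_name "passthru" remains.
  [ "$($rpc bdev_get_bdevs \
        | jq -r '[.[] | select(.product_name == "passthru")] | any')" = false ]
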
00:14:33.865   23:48:04	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1
00:14:33.865   23:48:04	-- common/autotest_common.sh@650 -- # local es=0
00:14:33.865   23:48:04	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1
00:14:33.865   23:48:04	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:33.865   23:48:04	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:33.865    23:48:04	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:33.865   23:48:04	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:33.865    23:48:04	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:33.865   23:48:04	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:33.865   23:48:04	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:33.866   23:48:04	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:14:33.866   23:48:04	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1
00:14:34.124  [2024-12-13 23:48:04.749754] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:14:34.124  [2024-12-13 23:48:04.751624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:14:34.124  [2024-12-13 23:48:04.751816] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:14:34.124  [2024-12-13 23:48:04.752359] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:14:34.124  [2024-12-13 23:48:04.752658] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:34.124  [2024-12-13 23:48:04.752786] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring
00:14:34.124  request:
00:14:34.124  {
00:14:34.124    "name": "raid_bdev1",
00:14:34.124    "raid_level": "concat",
00:14:34.124    "base_bdevs": [
00:14:34.124      "malloc1",
00:14:34.124      "malloc2"
00:14:34.124    ],
00:14:34.124    "superblock": false,
00:14:34.124    "strip_size_kb": 64,
00:14:34.124    "method": "bdev_raid_create",
00:14:34.124    "req_id": 1
00:14:34.124  }
00:14:34.124  Got JSON-RPC error response
00:14:34.124  response:
00:14:34.124  {
00:14:34.124    "code": -17,
00:14:34.124    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:14:34.124  }
00:14:34.124   23:48:04	-- common/autotest_common.sh@653 -- # es=1
00:14:34.124   23:48:04	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:34.124   23:48:04	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:34.124   23:48:04	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
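
The -17 response is the point of this step: malloc1 and malloc2 still carry the superblock written through the passthru layer, so re-creating raid_bdev1 directly on them must fail with File exists, and the NOT wrapper turns that expected failure into a pass. The same expectation without the wrapper:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  if $rpc bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1; then
    echo "ERROR: duplicate raid creation unexpectedly succeeded" >&2
    exit 1
  fi
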
00:14:34.124    23:48:04	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:34.124    23:48:04	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:14:34.382   23:48:04	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:14:34.382   23:48:04	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:14:34.382   23:48:04	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:34.382  [2024-12-13 23:48:05.114251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:34.382  [2024-12-13 23:48:05.114571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:34.382  [2024-12-13 23:48:05.114832] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:14:34.382  [2024-12-13 23:48:05.115069] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:34.640  [2024-12-13 23:48:05.117519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:34.640  [2024-12-13 23:48:05.117851] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:34.640  [2024-12-13 23:48:05.118213] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:14:34.640  [2024-12-13 23:48:05.118410] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:34.640  pt1
00:14:34.640   23:48:05	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2
00:14:34.640   23:48:05	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:34.640   23:48:05	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:34.640   23:48:05	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:34.640   23:48:05	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:34.640   23:48:05	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:34.640   23:48:05	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:34.640   23:48:05	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:34.640   23:48:05	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:34.640   23:48:05	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:34.640    23:48:05	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:34.640    23:48:05	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:34.640   23:48:05	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:34.640    "name": "raid_bdev1",
00:14:34.640    "uuid": "4d4080f0-bfce-430c-8e46-dea060c76ccc",
00:14:34.640    "strip_size_kb": 64,
00:14:34.640    "state": "configuring",
00:14:34.640    "raid_level": "concat",
00:14:34.640    "superblock": true,
00:14:34.640    "num_base_bdevs": 2,
00:14:34.640    "num_base_bdevs_discovered": 1,
00:14:34.640    "num_base_bdevs_operational": 2,
00:14:34.640    "base_bdevs_list": [
00:14:34.640      {
00:14:34.640        "name": "pt1",
00:14:34.640        "uuid": "0f0296c9-9158-56c3-b16d-343db05d6ce5",
00:14:34.640        "is_configured": true,
00:14:34.640        "data_offset": 2048,
00:14:34.640        "data_size": 63488
00:14:34.640      },
00:14:34.640      {
00:14:34.640        "name": null,
00:14:34.640        "uuid": "70ec1b88-e69c-5be5-ba66-5610203f357c",
00:14:34.640        "is_configured": false,
00:14:34.640        "data_offset": 2048,
00:14:34.640        "data_size": 63488
00:14:34.640      }
00:14:34.640    ]
00:14:34.640  }'
00:14:34.640   23:48:05	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:34.640   23:48:05	-- common/autotest_common.sh@10 -- # set +x
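
Recreating pt1 lets examine find the superblock on it, so raid_bdev1 reassembles on its own into the configuring state with one of two base bdevs discovered, exactly what the dump above records. A compact assertion of those fields (jq -e sets the exit status from the expression):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all \
    | jq -e '.[] | select(.name == "raid_bdev1")
             | .state == "configuring" and .num_base_bdevs_discovered == 1' \
    >/dev/null
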
00:14:35.244   23:48:05	-- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']'
00:14:35.244   23:48:05	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:14:35.244   23:48:05	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:14:35.244   23:48:05	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:35.521  [2024-12-13 23:48:06.094483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:35.521  [2024-12-13 23:48:06.094954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:35.521  [2024-12-13 23:48:06.095254] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:14:35.521  [2024-12-13 23:48:06.095509] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:35.521  [2024-12-13 23:48:06.096149] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:35.521  [2024-12-13 23:48:06.096414] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:35.521  [2024-12-13 23:48:06.096757] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:14:35.521  [2024-12-13 23:48:06.096924] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:35.521  [2024-12-13 23:48:06.097078] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80
00:14:35.521  [2024-12-13 23:48:06.097175] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:14:35.521  [2024-12-13 23:48:06.097343] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:14:35.521  [2024-12-13 23:48:06.097826] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80
00:14:35.521  [2024-12-13 23:48:06.097961] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80
00:14:35.521  [2024-12-13 23:48:06.098177] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:35.521  pt2
00:14:35.521   23:48:06	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:14:35.521   23:48:06	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:14:35.521   23:48:06	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:14:35.521   23:48:06	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:35.521   23:48:06	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:35.521   23:48:06	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:35.521   23:48:06	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:35.521   23:48:06	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:35.521   23:48:06	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:35.521   23:48:06	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:35.521   23:48:06	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:35.521   23:48:06	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:35.521    23:48:06	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:35.521    23:48:06	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:35.779   23:48:06	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:35.779    "name": "raid_bdev1",
00:14:35.779    "uuid": "4d4080f0-bfce-430c-8e46-dea060c76ccc",
00:14:35.779    "strip_size_kb": 64,
00:14:35.779    "state": "online",
00:14:35.779    "raid_level": "concat",
00:14:35.779    "superblock": true,
00:14:35.779    "num_base_bdevs": 2,
00:14:35.779    "num_base_bdevs_discovered": 2,
00:14:35.779    "num_base_bdevs_operational": 2,
00:14:35.779    "base_bdevs_list": [
00:14:35.779      {
00:14:35.779        "name": "pt1",
00:14:35.779        "uuid": "0f0296c9-9158-56c3-b16d-343db05d6ce5",
00:14:35.779        "is_configured": true,
00:14:35.779        "data_offset": 2048,
00:14:35.779        "data_size": 63488
00:14:35.779      },
00:14:35.779      {
00:14:35.779        "name": "pt2",
00:14:35.779        "uuid": "70ec1b88-e69c-5be5-ba66-5610203f357c",
00:14:35.779        "is_configured": true,
00:14:35.779        "data_offset": 2048,
00:14:35.780        "data_size": 63488
00:14:35.780      }
00:14:35.780    ]
00:14:35.780  }'
00:14:35.780   23:48:06	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:35.780   23:48:06	-- common/autotest_common.sh@10 -- # set +x
00:14:36.346    23:48:06	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:14:36.346    23:48:06	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:14:36.605  [2024-12-13 23:48:07.102833] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:36.605   23:48:07	-- bdev/bdev_raid.sh@430 -- # '[' 4d4080f0-bfce-430c-8e46-dea060c76ccc '!=' 4d4080f0-bfce-430c-8e46-dea060c76ccc ']'
00:14:36.605   23:48:07	-- bdev/bdev_raid.sh@434 -- # has_redundancy concat
00:14:36.605   23:48:07	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:14:36.605   23:48:07	-- bdev/bdev_raid.sh@197 -- # return 1
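
The string comparison a few lines up closes the loop: the UUID read back after reassembly must equal the 4d4080f0-... value captured when the array was first created, showing the superblock restored the original array rather than building a new one. The pattern, assuming raid_bdev_uuid was saved before the delete as in the trace:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  uuid_now=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
  [ "$uuid_now" = "$raid_bdev_uuid" ]    # identity must survive delete + reassembly
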
00:14:36.605   23:48:07	-- bdev/bdev_raid.sh@511 -- # killprocess 113603
00:14:36.605   23:48:07	-- common/autotest_common.sh@936 -- # '[' -z 113603 ']'
00:14:36.605   23:48:07	-- common/autotest_common.sh@940 -- # kill -0 113603
00:14:36.605    23:48:07	-- common/autotest_common.sh@941 -- # uname
00:14:36.605   23:48:07	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:36.605    23:48:07	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113603
00:14:36.605  killing process with pid 113603
00:14:36.605   23:48:07	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:36.605   23:48:07	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:36.605   23:48:07	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 113603'
00:14:36.605   23:48:07	-- common/autotest_common.sh@955 -- # kill 113603
00:14:36.605  [2024-12-13 23:48:07.158394] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:36.605   23:48:07	-- common/autotest_common.sh@960 -- # wait 113603
00:14:36.605  [2024-12-13 23:48:07.158455] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:36.605  [2024-12-13 23:48:07.158501] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:36.605  [2024-12-13 23:48:07.158512] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline
00:14:36.605  [2024-12-13 23:48:07.293962] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:37.982  ************************************
00:14:37.982  END TEST raid_superblock_test
00:14:37.982  ************************************
00:14:37.982   23:48:08	-- bdev/bdev_raid.sh@513 -- # return 0
00:14:37.982  
00:14:37.982  real	0m8.012s
00:14:37.982  user	0m13.512s
00:14:37.982  sys	0m0.991s
00:14:37.982   23:48:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:37.982   23:48:08	-- common/autotest_common.sh@10 -- # set +x
00:14:37.982   23:48:08	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:14:37.982   23:48:08	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false
00:14:37.982   23:48:08	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:14:37.982   23:48:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:37.982   23:48:08	-- common/autotest_common.sh@10 -- # set +x
00:14:37.982  ************************************
00:14:37.982  START TEST raid_state_function_test
00:14:37.982  ************************************
00:14:37.982   23:48:08	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 false
00:14:37.982   23:48:08	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid1
00:14:37.982   23:48:08	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2
00:14:37.982   23:48:08	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:14:37.982   23:48:08	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:14:37.982    23:48:08	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:14:37.982    23:48:08	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:37.982    23:48:08	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:14:37.982    23:48:08	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:37.982    23:48:08	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:37.982    23:48:08	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:14:37.982    23:48:08	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:37.982    23:48:08	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:37.982   23:48:08	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:14:37.982   23:48:08	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:14:37.982   23:48:08	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:14:37.982   23:48:08	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:14:37.983   23:48:08	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:14:37.983   23:48:08	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:14:37.983   23:48:08	-- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']'
00:14:37.983   23:48:08	-- bdev/bdev_raid.sh@216 -- # strip_size=0
00:14:37.983   23:48:08	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:14:37.983   23:48:08	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:14:37.983   23:48:08	-- bdev/bdev_raid.sh@226 -- # raid_pid=113848
00:14:37.983   23:48:08	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 113848'
00:14:37.983   23:48:08	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:14:37.983  Process raid pid: 113848
00:14:37.983   23:48:08	-- bdev/bdev_raid.sh@228 -- # waitforlisten 113848 /var/tmp/spdk-raid.sock
00:14:37.983   23:48:08	-- common/autotest_common.sh@829 -- # '[' -z 113848 ']'
00:14:37.983   23:48:08	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:37.983   23:48:08	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:37.983   23:48:08	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:37.983  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:37.983   23:48:08	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:37.983   23:48:08	-- common/autotest_common.sh@10 -- # set +x
00:14:37.983  [2024-12-13 23:48:08.459109] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:37.983  [2024-12-13 23:48:08.459526] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:37.983  [2024-12-13 23:48:08.633380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:38.241  [2024-12-13 23:48:08.873069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:38.500  [2024-12-13 23:48:09.062079] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:38.759   23:48:09	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:38.759   23:48:09	-- common/autotest_common.sh@862 -- # return 0
00:14:38.759   23:48:09	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:39.018  [2024-12-13 23:48:09.662279] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:39.018  [2024-12-13 23:48:09.662509] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:39.018  [2024-12-13 23:48:09.662642] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:39.018  [2024-12-13 23:48:09.662782] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:39.018   23:48:09	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:14:39.018   23:48:09	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:39.018   23:48:09	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:39.018   23:48:09	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:39.018   23:48:09	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:39.018   23:48:09	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:39.018   23:48:09	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:39.018   23:48:09	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:39.018   23:48:09	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:39.018   23:48:09	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:39.018    23:48:09	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:39.018    23:48:09	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:39.276   23:48:09	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:39.276    "name": "Existed_Raid",
00:14:39.276    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:39.276    "strip_size_kb": 0,
00:14:39.276    "state": "configuring",
00:14:39.276    "raid_level": "raid1",
00:14:39.276    "superblock": false,
00:14:39.276    "num_base_bdevs": 2,
00:14:39.277    "num_base_bdevs_discovered": 0,
00:14:39.277    "num_base_bdevs_operational": 2,
00:14:39.277    "base_bdevs_list": [
00:14:39.277      {
00:14:39.277        "name": "BaseBdev1",
00:14:39.277        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:39.277        "is_configured": false,
00:14:39.277        "data_offset": 0,
00:14:39.277        "data_size": 0
00:14:39.277      },
00:14:39.277      {
00:14:39.277        "name": "BaseBdev2",
00:14:39.277        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:39.277        "is_configured": false,
00:14:39.277        "data_offset": 0,
00:14:39.277        "data_size": 0
00:14:39.277      }
00:14:39.277    ]
00:14:39.277  }'
00:14:39.277   23:48:09	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:39.277   23:48:09	-- common/autotest_common.sh@10 -- # set +x
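
Note the ordering in this test: bdev_raid_create was issued before BaseBdev1/BaseBdev2 existed, raid1 takes no -z strip size (strip_size_kb is 0), and the RPC still succeeds, parking Existed_Raid in configuring with zero base bdevs discovered as the dump above shows. In isolation:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # raid1 needs no strip size; creation succeeds even while the bases are absent.
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  $rpc bdev_raid_get_bdevs all \
    | jq -e '.[] | select(.name == "Existed_Raid") | .state == "configuring"' >/dev/null
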
00:14:39.843   23:48:10	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:40.101  [2024-12-13 23:48:10.762349] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:40.101  [2024-12-13 23:48:10.762510] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:14:40.101   23:48:10	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:40.360  [2024-12-13 23:48:11.026396] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:40.360  [2024-12-13 23:48:11.026592] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:40.360  [2024-12-13 23:48:11.026719] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:40.360  [2024-12-13 23:48:11.026872] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:40.360   23:48:11	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:40.640  [2024-12-13 23:48:11.295524] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:40.640  BaseBdev1
00:14:40.640   23:48:11	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:14:40.640   23:48:11	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:40.640   23:48:11	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:40.640   23:48:11	-- common/autotest_common.sh@899 -- # local i
00:14:40.640   23:48:11	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:40.640   23:48:11	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:40.640   23:48:11	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:40.898   23:48:11	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:41.156  [
00:14:41.156    {
00:14:41.156      "name": "BaseBdev1",
00:14:41.156      "aliases": [
00:14:41.156        "325d0bc9-78d6-4702-9344-f8ec23ea93c6"
00:14:41.156      ],
00:14:41.156      "product_name": "Malloc disk",
00:14:41.156      "block_size": 512,
00:14:41.156      "num_blocks": 65536,
00:14:41.156      "uuid": "325d0bc9-78d6-4702-9344-f8ec23ea93c6",
00:14:41.156      "assigned_rate_limits": {
00:14:41.156        "rw_ios_per_sec": 0,
00:14:41.156        "rw_mbytes_per_sec": 0,
00:14:41.156        "r_mbytes_per_sec": 0,
00:14:41.156        "w_mbytes_per_sec": 0
00:14:41.156      },
00:14:41.156      "claimed": true,
00:14:41.156      "claim_type": "exclusive_write",
00:14:41.156      "zoned": false,
00:14:41.156      "supported_io_types": {
00:14:41.156        "read": true,
00:14:41.156        "write": true,
00:14:41.156        "unmap": true,
00:14:41.156        "write_zeroes": true,
00:14:41.156        "flush": true,
00:14:41.156        "reset": true,
00:14:41.156        "compare": false,
00:14:41.156        "compare_and_write": false,
00:14:41.156        "abort": true,
00:14:41.156        "nvme_admin": false,
00:14:41.156        "nvme_io": false
00:14:41.156      },
00:14:41.156      "memory_domains": [
00:14:41.156        {
00:14:41.156          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:41.156          "dma_device_type": 2
00:14:41.156        }
00:14:41.156      ],
00:14:41.156      "driver_specific": {}
00:14:41.156    }
00:14:41.156  ]
00:14:41.156   23:48:11	-- common/autotest_common.sh@905 -- # return 0
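
waitforbdev, whose trace ends with the return 0 above, flushes pending examine callbacks and then polls for the named bdev with the 2000 ms timeout. Reduced to its two RPCs:

  waitforbdev() {
    local bdev_name=$1 bdev_timeout=${2:-2000}   # timeout in milliseconds
    local rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_wait_for_examine                   # let examine finish claiming bdevs
    $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
  }
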
00:14:41.156   23:48:11	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:14:41.156   23:48:11	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:41.156   23:48:11	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:41.156   23:48:11	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:41.156   23:48:11	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:41.156   23:48:11	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:41.156   23:48:11	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:41.156   23:48:11	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:41.156   23:48:11	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:41.156   23:48:11	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:41.156    23:48:11	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:41.156    23:48:11	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:41.414   23:48:11	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:41.414    "name": "Existed_Raid",
00:14:41.414    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:41.414    "strip_size_kb": 0,
00:14:41.414    "state": "configuring",
00:14:41.414    "raid_level": "raid1",
00:14:41.414    "superblock": false,
00:14:41.414    "num_base_bdevs": 2,
00:14:41.414    "num_base_bdevs_discovered": 1,
00:14:41.414    "num_base_bdevs_operational": 2,
00:14:41.414    "base_bdevs_list": [
00:14:41.414      {
00:14:41.414        "name": "BaseBdev1",
00:14:41.414        "uuid": "325d0bc9-78d6-4702-9344-f8ec23ea93c6",
00:14:41.414        "is_configured": true,
00:14:41.414        "data_offset": 0,
00:14:41.414        "data_size": 65536
00:14:41.414      },
00:14:41.414      {
00:14:41.414        "name": "BaseBdev2",
00:14:41.414        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:41.414        "is_configured": false,
00:14:41.414        "data_offset": 0,
00:14:41.414        "data_size": 0
00:14:41.414      }
00:14:41.414    ]
00:14:41.414  }'
00:14:41.414   23:48:11	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:41.414   23:48:11	-- common/autotest_common.sh@10 -- # set +x
00:14:41.980   23:48:12	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:42.238  [2024-12-13 23:48:12.831820] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:42.238  [2024-12-13 23:48:12.831981] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:14:42.238   23:48:12	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:14:42.238   23:48:12	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:42.497  [2024-12-13 23:48:13.083916] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:42.497  [2024-12-13 23:48:13.086592] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:42.497  [2024-12-13 23:48:13.086789] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:42.497   23:48:13	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:14:42.497   23:48:13	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:42.497   23:48:13	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:14:42.497   23:48:13	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:42.497   23:48:13	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:42.497   23:48:13	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:42.497   23:48:13	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:42.497   23:48:13	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:42.497   23:48:13	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:42.497   23:48:13	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:42.497   23:48:13	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:42.497   23:48:13	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:42.497    23:48:13	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:42.497    23:48:13	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:42.756   23:48:13	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:42.756    "name": "Existed_Raid",
00:14:42.756    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:42.756    "strip_size_kb": 0,
00:14:42.756    "state": "configuring",
00:14:42.756    "raid_level": "raid1",
00:14:42.756    "superblock": false,
00:14:42.756    "num_base_bdevs": 2,
00:14:42.756    "num_base_bdevs_discovered": 1,
00:14:42.756    "num_base_bdevs_operational": 2,
00:14:42.756    "base_bdevs_list": [
00:14:42.756      {
00:14:42.756        "name": "BaseBdev1",
00:14:42.756        "uuid": "325d0bc9-78d6-4702-9344-f8ec23ea93c6",
00:14:42.756        "is_configured": true,
00:14:42.756        "data_offset": 0,
00:14:42.756        "data_size": 65536
00:14:42.756      },
00:14:42.756      {
00:14:42.756        "name": "BaseBdev2",
00:14:42.756        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:42.756        "is_configured": false,
00:14:42.756        "data_offset": 0,
00:14:42.756        "data_size": 0
00:14:42.756      }
00:14:42.756    ]
00:14:42.756  }'
00:14:42.756   23:48:13	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:42.756   23:48:13	-- common/autotest_common.sh@10 -- # set +x
00:14:43.323   23:48:13	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:14:43.581  [2024-12-13 23:48:14.138210] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:43.581  [2024-12-13 23:48:14.138399] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:14:43.581  [2024-12-13 23:48:14.138443] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:14:43.581  [2024-12-13 23:48:14.138642] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0
00:14:43.581  [2024-12-13 23:48:14.139112] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:14:43.581  [2024-12-13 23:48:14.139251] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80
00:14:43.581  [2024-12-13 23:48:14.139643] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:43.581  BaseBdev2
00:14:43.581   23:48:14	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:14:43.581   23:48:14	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:14:43.581   23:48:14	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:43.581   23:48:14	-- common/autotest_common.sh@899 -- # local i
00:14:43.581   23:48:14	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:43.581   23:48:14	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:43.581   23:48:14	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:43.839   23:48:14	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:44.097  [
00:14:44.097    {
00:14:44.097      "name": "BaseBdev2",
00:14:44.097      "aliases": [
00:14:44.097        "718aa0b7-d9cd-413d-bb48-2f0d2f4dcb13"
00:14:44.097      ],
00:14:44.097      "product_name": "Malloc disk",
00:14:44.097      "block_size": 512,
00:14:44.097      "num_blocks": 65536,
00:14:44.097      "uuid": "718aa0b7-d9cd-413d-bb48-2f0d2f4dcb13",
00:14:44.097      "assigned_rate_limits": {
00:14:44.097        "rw_ios_per_sec": 0,
00:14:44.097        "rw_mbytes_per_sec": 0,
00:14:44.097        "r_mbytes_per_sec": 0,
00:14:44.097        "w_mbytes_per_sec": 0
00:14:44.097      },
00:14:44.097      "claimed": true,
00:14:44.097      "claim_type": "exclusive_write",
00:14:44.097      "zoned": false,
00:14:44.097      "supported_io_types": {
00:14:44.097        "read": true,
00:14:44.097        "write": true,
00:14:44.097        "unmap": true,
00:14:44.097        "write_zeroes": true,
00:14:44.097        "flush": true,
00:14:44.097        "reset": true,
00:14:44.097        "compare": false,
00:14:44.097        "compare_and_write": false,
00:14:44.097        "abort": true,
00:14:44.097        "nvme_admin": false,
00:14:44.097        "nvme_io": false
00:14:44.097      },
00:14:44.097      "memory_domains": [
00:14:44.097        {
00:14:44.097          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:44.097          "dma_device_type": 2
00:14:44.097        }
00:14:44.097      ],
00:14:44.097      "driver_specific": {}
00:14:44.097    }
00:14:44.097  ]
00:14:44.097   23:48:14	-- common/autotest_common.sh@905 -- # return 0
00:14:44.097   23:48:14	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:14:44.097   23:48:14	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:44.097   23:48:14	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:14:44.097   23:48:14	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:44.097   23:48:14	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:44.097   23:48:14	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:44.097   23:48:14	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:44.097   23:48:14	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:44.097   23:48:14	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:44.097   23:48:14	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:44.097   23:48:14	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:44.097   23:48:14	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:44.097    23:48:14	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:44.097    23:48:14	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:44.097   23:48:14	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:44.097    "name": "Existed_Raid",
00:14:44.097    "uuid": "e6b58124-e54e-41af-b45c-eb44fe3e1a87",
00:14:44.097    "strip_size_kb": 0,
00:14:44.098    "state": "online",
00:14:44.098    "raid_level": "raid1",
00:14:44.098    "superblock": false,
00:14:44.098    "num_base_bdevs": 2,
00:14:44.098    "num_base_bdevs_discovered": 2,
00:14:44.098    "num_base_bdevs_operational": 2,
00:14:44.098    "base_bdevs_list": [
00:14:44.098      {
00:14:44.098        "name": "BaseBdev1",
00:14:44.098        "uuid": "325d0bc9-78d6-4702-9344-f8ec23ea93c6",
00:14:44.098        "is_configured": true,
00:14:44.098        "data_offset": 0,
00:14:44.098        "data_size": 65536
00:14:44.098      },
00:14:44.098      {
00:14:44.098        "name": "BaseBdev2",
00:14:44.098        "uuid": "718aa0b7-d9cd-413d-bb48-2f0d2f4dcb13",
00:14:44.098        "is_configured": true,
00:14:44.098        "data_offset": 0,
00:14:44.098        "data_size": 65536
00:14:44.098      }
00:14:44.098    ]
00:14:44.098  }'
00:14:44.098   23:48:14	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:44.098   23:48:14	-- common/autotest_common.sh@10 -- # set +x
00:14:44.663   23:48:15	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:44.922  [2024-12-13 23:48:15.560135] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:44.922   23:48:15	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:14:44.922   23:48:15	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid1
00:14:44.922   23:48:15	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:14:44.922   23:48:15	-- bdev/bdev_raid.sh@196 -- # return 0
00:14:44.922   23:48:15	-- bdev/bdev_raid.sh@267 -- # expected_state=online
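
This is the mirror image of the concat run earlier in the log: has_redundancy returns 0 for raid1, so after BaseBdev1 is deleted the expected state stays online with a single operational member instead of dropping to offline. For the levels exercised here the helper's decision reduces to a case statement:

  has_redundancy() {
    case $1 in
      raid1) return 0 ;;   # mirrored: tolerates a missing base bdev
      *)     return 1 ;;   # raid0/concat: any loss takes the array offline
    esac
  }
  if has_redundancy raid1; then expected_state=online; else expected_state=offline; fi
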
00:14:44.922   23:48:15	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:14:44.922   23:48:15	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:44.922   23:48:15	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:44.922   23:48:15	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:44.922   23:48:15	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:44.922   23:48:15	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:14:44.922   23:48:15	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:44.922   23:48:15	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:44.922   23:48:15	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:44.922   23:48:15	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:44.922    23:48:15	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:44.922    23:48:15	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:45.181   23:48:15	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:45.181    "name": "Existed_Raid",
00:14:45.181    "uuid": "e6b58124-e54e-41af-b45c-eb44fe3e1a87",
00:14:45.181    "strip_size_kb": 0,
00:14:45.181    "state": "online",
00:14:45.181    "raid_level": "raid1",
00:14:45.181    "superblock": false,
00:14:45.181    "num_base_bdevs": 2,
00:14:45.181    "num_base_bdevs_discovered": 1,
00:14:45.181    "num_base_bdevs_operational": 1,
00:14:45.181    "base_bdevs_list": [
00:14:45.181      {
00:14:45.181        "name": null,
00:14:45.181        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:45.181        "is_configured": false,
00:14:45.181        "data_offset": 0,
00:14:45.181        "data_size": 65536
00:14:45.181      },
00:14:45.181      {
00:14:45.181        "name": "BaseBdev2",
00:14:45.181        "uuid": "718aa0b7-d9cd-413d-bb48-2f0d2f4dcb13",
00:14:45.181        "is_configured": true,
00:14:45.181        "data_offset": 0,
00:14:45.181        "data_size": 65536
00:14:45.181      }
00:14:45.181    ]
00:14:45.181  }'
00:14:45.181   23:48:15	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:45.181   23:48:15	-- common/autotest_common.sh@10 -- # set +x
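
The dump above is the expected degraded picture for raid1: after bdev_malloc_delete removes BaseBdev1, the array stays "online" (has_redundancy returns 0 for raid1), num_base_bdevs_discovered and num_base_bdevs_operational drop to 1, and the missing leg is reported as a null entry with an all-zero UUID. A minimal sketch of the same check done by hand, assuming a target already listening on the socket used in this trace:

    # Remove one mirror leg, then confirm the raid1 bdev stays online (degraded).
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_delete BaseBdev1
    $RPC bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "Existed_Raid") |
             "\(.state) \(.num_base_bdevs_operational)/\(.num_base_bdevs)"'
    # Expected after losing one of two legs: "online 1/2"
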
00:14:45.748   23:48:16	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:14:45.748   23:48:16	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:45.748    23:48:16	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:45.748    23:48:16	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:14:46.007   23:48:16	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:14:46.007   23:48:16	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:46.007   23:48:16	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:14:46.265  [2024-12-13 23:48:16.870161] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:46.265  [2024-12-13 23:48:16.870313] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:46.265  [2024-12-13 23:48:16.870490] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:46.266  [2024-12-13 23:48:16.937262] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:46.266  [2024-12-13 23:48:16.937494] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline
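
Deleting the last remaining leg is different: the array cannot stay online with zero members, so it transitions from online to offline, is destructed, and its memory is freed, as the cleanup messages above show. The loop that follows confirms no raid bdev is left; the same probe in isolation:

    # Once the last member is gone the raid bdev is destructed and the list empties.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'
    # Prints nothing once Existed_Raid has been cleaned up.
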
00:14:46.266   23:48:16	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:14:46.266   23:48:16	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:46.266    23:48:16	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:46.266    23:48:16	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:14:46.524   23:48:17	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:14:46.524   23:48:17	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:14:46.524   23:48:17	-- bdev/bdev_raid.sh@287 -- # killprocess 113848
00:14:46.524   23:48:17	-- common/autotest_common.sh@936 -- # '[' -z 113848 ']'
00:14:46.524   23:48:17	-- common/autotest_common.sh@940 -- # kill -0 113848
00:14:46.524    23:48:17	-- common/autotest_common.sh@941 -- # uname
00:14:46.524   23:48:17	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:46.524    23:48:17	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113848
00:14:46.524   23:48:17	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:46.524   23:48:17	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:46.524   23:48:17	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 113848'
00:14:46.524  killing process with pid 113848
00:14:46.524   23:48:17	-- common/autotest_common.sh@955 -- # kill 113848
00:14:46.524  [2024-12-13 23:48:17.171469] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:46.524  [2024-12-13 23:48:17.171725] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:46.524   23:48:17	-- common/autotest_common.sh@960 -- # wait 113848
00:14:47.460  ************************************
00:14:47.460  END TEST raid_state_function_test
00:14:47.460  ************************************
00:14:47.460   23:48:18	-- bdev/bdev_raid.sh@289 -- # return 0
00:14:47.460  
00:14:47.460  real	0m9.806s
00:14:47.460  user	0m16.923s
00:14:47.460  sys	0m1.243s
00:14:47.460   23:48:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:47.460   23:48:18	-- common/autotest_common.sh@10 -- # set +x
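
raid_state_function_test finishes here (about 9.8 s wall time), and run_test immediately repeats the same state machine as raid_state_function_test_sb with the superblock argument set to true. Per the superblock_create_arg branch below, the only difference in the create call is the added -s flag; the dumps later in this test show its effect (data_offset 2048 instead of 0). A sketch of the two variants, the first form inferred from that branch rather than shown in this trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Without on-disk superblock (previous test): member data starts at offset 0.
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    # With superblock (this test): raid metadata is written to each member.
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
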
00:14:47.719   23:48:18	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true
00:14:47.719   23:48:18	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:14:47.719   23:48:18	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:47.719   23:48:18	-- common/autotest_common.sh@10 -- # set +x
00:14:47.719  ************************************
00:14:47.719  START TEST raid_state_function_test_sb
00:14:47.719  ************************************
00:14:47.719   23:48:18	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 true
00:14:47.719   23:48:18	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid1
00:14:47.719   23:48:18	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2
00:14:47.719   23:48:18	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:14:47.719   23:48:18	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:14:47.719    23:48:18	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:14:47.719    23:48:18	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:47.719    23:48:18	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:14:47.719    23:48:18	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:47.719    23:48:18	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:47.719    23:48:18	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:14:47.719    23:48:18	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:47.719    23:48:18	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:47.719   23:48:18	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:14:47.719   23:48:18	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:14:47.719   23:48:18	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:14:47.720   23:48:18	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:14:47.720   23:48:18	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:14:47.720   23:48:18	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:14:47.720   23:48:18	-- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']'
00:14:47.720   23:48:18	-- bdev/bdev_raid.sh@216 -- # strip_size=0
00:14:47.720   23:48:18	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:14:47.720   23:48:18	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:14:47.720   23:48:18	-- bdev/bdev_raid.sh@226 -- # raid_pid=114166
00:14:47.720   23:48:18	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:14:47.720  Process raid pid: 114166
00:14:47.720   23:48:18	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 114166'
00:14:47.720   23:48:18	-- bdev/bdev_raid.sh@228 -- # waitforlisten 114166 /var/tmp/spdk-raid.sock
00:14:47.720   23:48:18	-- common/autotest_common.sh@829 -- # '[' -z 114166 ']'
00:14:47.720   23:48:18	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:47.720   23:48:18	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:47.720   23:48:18	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:47.720  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:47.720   23:48:18	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:47.720   23:48:18	-- common/autotest_common.sh@10 -- # set +x
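
Each test starts its own bdev_svc application against a private RPC socket and then blocks in waitforlisten until that socket answers. The waitforlisten implementation lives in autotest_common.sh and is not shown in this trace; the loop below is an illustrative stand-in for what it waits on, not the original code:

    # Start the app, then poll the RPC socket until it responds (stand-in for waitforlisten).
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_get_bdevs >/dev/null 2>&1 && break
        sleep 0.1
    done
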
00:14:47.720  [2024-12-13 23:48:18.328726] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:47.720  [2024-12-13 23:48:18.329087] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:47.978  [2024-12-13 23:48:18.502170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:48.237  [2024-12-13 23:48:18.729094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:48.237  [2024-12-13 23:48:18.902291] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:48.495   23:48:19	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:48.495   23:48:19	-- common/autotest_common.sh@862 -- # return 0
00:14:48.495   23:48:19	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:48.754  [2024-12-13 23:48:19.358727] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:48.754  [2024-12-13 23:48:19.359007] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:48.754  [2024-12-13 23:48:19.359125] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:48.754  [2024-12-13 23:48:19.359188] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:48.754   23:48:19	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:14:48.754   23:48:19	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:48.754   23:48:19	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:48.754   23:48:19	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:48.754   23:48:19	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:48.754   23:48:19	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:48.754   23:48:19	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:48.754   23:48:19	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:48.754   23:48:19	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:48.754   23:48:19	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:48.754    23:48:19	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:48.754    23:48:19	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:49.012   23:48:19	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:49.012    "name": "Existed_Raid",
00:14:49.012    "uuid": "10a03743-aff0-4312-8d1c-90b43630e7cc",
00:14:49.012    "strip_size_kb": 0,
00:14:49.012    "state": "configuring",
00:14:49.012    "raid_level": "raid1",
00:14:49.012    "superblock": true,
00:14:49.012    "num_base_bdevs": 2,
00:14:49.012    "num_base_bdevs_discovered": 0,
00:14:49.012    "num_base_bdevs_operational": 2,
00:14:49.012    "base_bdevs_list": [
00:14:49.012      {
00:14:49.012        "name": "BaseBdev1",
00:14:49.012        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:49.012        "is_configured": false,
00:14:49.012        "data_offset": 0,
00:14:49.012        "data_size": 0
00:14:49.012      },
00:14:49.012      {
00:14:49.012        "name": "BaseBdev2",
00:14:49.012        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:49.012        "is_configured": false,
00:14:49.012        "data_offset": 0,
00:14:49.012        "data_size": 0
00:14:49.012      }
00:14:49.012    ]
00:14:49.012  }'
00:14:49.012   23:48:19	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:49.012   23:48:19	-- common/autotest_common.sh@10 -- # set +x
00:14:49.579   23:48:20	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:49.838  [2024-12-13 23:48:20.466801] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:49.838  [2024-12-13 23:48:20.466975] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:14:49.838   23:48:20	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:50.096  [2024-12-13 23:48:20.686921] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:50.096  [2024-12-13 23:48:20.687129] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:50.096  [2024-12-13 23:48:20.687264] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:50.096  [2024-12-13 23:48:20.687441] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:50.096   23:48:20	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:50.354  [2024-12-13 23:48:20.956622] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:50.354  BaseBdev1
00:14:50.354   23:48:20	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:14:50.354   23:48:20	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:50.354   23:48:20	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:50.354   23:48:20	-- common/autotest_common.sh@899 -- # local i
00:14:50.354   23:48:20	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:50.354   23:48:20	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:50.354   23:48:20	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:50.612   23:48:21	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:50.612  [
00:14:50.612    {
00:14:50.612      "name": "BaseBdev1",
00:14:50.612      "aliases": [
00:14:50.612        "6aca9308-28bf-48fc-a56d-51255fd15e7d"
00:14:50.612      ],
00:14:50.612      "product_name": "Malloc disk",
00:14:50.612      "block_size": 512,
00:14:50.612      "num_blocks": 65536,
00:14:50.612      "uuid": "6aca9308-28bf-48fc-a56d-51255fd15e7d",
00:14:50.612      "assigned_rate_limits": {
00:14:50.612        "rw_ios_per_sec": 0,
00:14:50.612        "rw_mbytes_per_sec": 0,
00:14:50.612        "r_mbytes_per_sec": 0,
00:14:50.612        "w_mbytes_per_sec": 0
00:14:50.612      },
00:14:50.612      "claimed": true,
00:14:50.612      "claim_type": "exclusive_write",
00:14:50.612      "zoned": false,
00:14:50.612      "supported_io_types": {
00:14:50.612        "read": true,
00:14:50.612        "write": true,
00:14:50.612        "unmap": true,
00:14:50.612        "write_zeroes": true,
00:14:50.612        "flush": true,
00:14:50.612        "reset": true,
00:14:50.612        "compare": false,
00:14:50.612        "compare_and_write": false,
00:14:50.612        "abort": true,
00:14:50.612        "nvme_admin": false,
00:14:50.612        "nvme_io": false
00:14:50.612      },
00:14:50.612      "memory_domains": [
00:14:50.612        {
00:14:50.612          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:50.612          "dma_device_type": 2
00:14:50.612        }
00:14:50.612      ],
00:14:50.612      "driver_specific": {}
00:14:50.612    }
00:14:50.612  ]
00:14:50.870   23:48:21	-- common/autotest_common.sh@905 -- # return 0
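
waitforbdev above is the standard readiness idiom: bdev_wait_for_examine blocks until all registered examine callbacks have finished, and bdev_get_bdevs -b NAME -t 2000 then waits up to 2000 ms for the named bdev to exist, failing on timeout. Condensed:

    # Wait for examine to settle, then block up to 2 s for the bdev to appear.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_wait_for_examine
    $RPC bdev_get_bdevs -b BaseBdev1 -t 2000 >/dev/null && echo "BaseBdev1 ready"
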
00:14:50.870   23:48:21	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:14:50.870   23:48:21	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:50.870   23:48:21	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:50.870   23:48:21	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:50.870   23:48:21	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:50.870   23:48:21	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:50.870   23:48:21	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:50.870   23:48:21	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:50.870   23:48:21	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:50.870   23:48:21	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:50.870    23:48:21	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:50.870    23:48:21	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:50.870   23:48:21	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:50.870    "name": "Existed_Raid",
00:14:50.870    "uuid": "e8ef904d-401a-4ba8-890f-081eb4e02121",
00:14:50.870    "strip_size_kb": 0,
00:14:50.870    "state": "configuring",
00:14:50.870    "raid_level": "raid1",
00:14:50.870    "superblock": true,
00:14:50.870    "num_base_bdevs": 2,
00:14:50.870    "num_base_bdevs_discovered": 1,
00:14:50.870    "num_base_bdevs_operational": 2,
00:14:50.870    "base_bdevs_list": [
00:14:50.870      {
00:14:50.870        "name": "BaseBdev1",
00:14:50.870        "uuid": "6aca9308-28bf-48fc-a56d-51255fd15e7d",
00:14:50.870        "is_configured": true,
00:14:50.870        "data_offset": 2048,
00:14:50.870        "data_size": 63488
00:14:50.870      },
00:14:50.870      {
00:14:50.870        "name": "BaseBdev2",
00:14:50.870        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:50.870        "is_configured": false,
00:14:50.870        "data_offset": 0,
00:14:50.870        "data_size": 0
00:14:50.870      }
00:14:50.870    ]
00:14:50.870  }'
00:14:50.870   23:48:21	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:50.870   23:48:21	-- common/autotest_common.sh@10 -- # set +x
00:14:51.438   23:48:22	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:51.696  [2024-12-13 23:48:22.288873] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:51.696  [2024-12-13 23:48:22.289048] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:14:51.696   23:48:22	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:14:51.696   23:48:22	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:51.955   23:48:22	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:52.213  BaseBdev1
00:14:52.213   23:48:22	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:14:52.213   23:48:22	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:52.213   23:48:22	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:52.213   23:48:22	-- common/autotest_common.sh@899 -- # local i
00:14:52.213   23:48:22	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:52.213   23:48:22	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:52.213   23:48:22	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:52.471   23:48:22	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:52.471  [
00:14:52.471    {
00:14:52.471      "name": "BaseBdev1",
00:14:52.471      "aliases": [
00:14:52.471        "ec4ed240-f3b7-419b-9eca-ff07e9318063"
00:14:52.471      ],
00:14:52.471      "product_name": "Malloc disk",
00:14:52.471      "block_size": 512,
00:14:52.471      "num_blocks": 65536,
00:14:52.471      "uuid": "ec4ed240-f3b7-419b-9eca-ff07e9318063",
00:14:52.471      "assigned_rate_limits": {
00:14:52.471        "rw_ios_per_sec": 0,
00:14:52.471        "rw_mbytes_per_sec": 0,
00:14:52.471        "r_mbytes_per_sec": 0,
00:14:52.471        "w_mbytes_per_sec": 0
00:14:52.471      },
00:14:52.471      "claimed": false,
00:14:52.471      "zoned": false,
00:14:52.471      "supported_io_types": {
00:14:52.471        "read": true,
00:14:52.471        "write": true,
00:14:52.471        "unmap": true,
00:14:52.471        "write_zeroes": true,
00:14:52.471        "flush": true,
00:14:52.471        "reset": true,
00:14:52.471        "compare": false,
00:14:52.471        "compare_and_write": false,
00:14:52.471        "abort": true,
00:14:52.471        "nvme_admin": false,
00:14:52.471        "nvme_io": false
00:14:52.471      },
00:14:52.471      "memory_domains": [
00:14:52.471        {
00:14:52.471          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:52.471          "dma_device_type": 2
00:14:52.471        }
00:14:52.471      ],
00:14:52.471      "driver_specific": {}
00:14:52.471    }
00:14:52.471  ]
00:14:52.471   23:48:23	-- common/autotest_common.sh@905 -- # return 0
00:14:52.471   23:48:23	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:52.729  [2024-12-13 23:48:23.326575] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:52.730  [2024-12-13 23:48:23.328591] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:52.730  [2024-12-13 23:48:23.328790] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:52.730   23:48:23	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:14:52.730   23:48:23	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:52.730   23:48:23	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:14:52.730   23:48:23	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:52.730   23:48:23	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:52.730   23:48:23	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:52.730   23:48:23	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:52.730   23:48:23	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:52.730   23:48:23	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:52.730   23:48:23	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:52.730   23:48:23	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:52.730   23:48:23	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:52.730    23:48:23	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:52.730    23:48:23	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:52.988   23:48:23	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:52.988    "name": "Existed_Raid",
00:14:52.988    "uuid": "0b5bad60-0b68-45f2-81b8-c03a9c39b265",
00:14:52.988    "strip_size_kb": 0,
00:14:52.988    "state": "configuring",
00:14:52.988    "raid_level": "raid1",
00:14:52.988    "superblock": true,
00:14:52.988    "num_base_bdevs": 2,
00:14:52.988    "num_base_bdevs_discovered": 1,
00:14:52.988    "num_base_bdevs_operational": 2,
00:14:52.988    "base_bdevs_list": [
00:14:52.988      {
00:14:52.988        "name": "BaseBdev1",
00:14:52.988        "uuid": "ec4ed240-f3b7-419b-9eca-ff07e9318063",
00:14:52.988        "is_configured": true,
00:14:52.988        "data_offset": 2048,
00:14:52.988        "data_size": 63488
00:14:52.988      },
00:14:52.988      {
00:14:52.988        "name": "BaseBdev2",
00:14:52.988        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:52.988        "is_configured": false,
00:14:52.988        "data_offset": 0,
00:14:52.988        "data_size": 0
00:14:52.988      }
00:14:52.988    ]
00:14:52.988  }'
00:14:52.988   23:48:23	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:52.988   23:48:23	-- common/autotest_common.sh@10 -- # set +x
00:14:53.555   23:48:24	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:14:53.814  [2024-12-13 23:48:24.303393] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:53.814  [2024-12-13 23:48:24.303843] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580
00:14:53.814  BaseBdev2
00:14:53.814  [2024-12-13 23:48:24.305023] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:53.814  [2024-12-13 23:48:24.305244] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0
00:14:53.814  [2024-12-13 23:48:24.305756] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580
00:14:53.814  [2024-12-13 23:48:24.305923] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580
00:14:53.814  [2024-12-13 23:48:24.306184] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:53.814   23:48:24	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:14:53.814   23:48:24	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:14:53.814   23:48:24	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:53.814   23:48:24	-- common/autotest_common.sh@899 -- # local i
00:14:53.814   23:48:24	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:53.814   23:48:24	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:53.814   23:48:24	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:53.814   23:48:24	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:54.083  [
00:14:54.083    {
00:14:54.083      "name": "BaseBdev2",
00:14:54.083      "aliases": [
00:14:54.083        "dca231d4-7f53-4ce3-ad48-a149d19797cd"
00:14:54.083      ],
00:14:54.083      "product_name": "Malloc disk",
00:14:54.083      "block_size": 512,
00:14:54.083      "num_blocks": 65536,
00:14:54.083      "uuid": "dca231d4-7f53-4ce3-ad48-a149d19797cd",
00:14:54.083      "assigned_rate_limits": {
00:14:54.083        "rw_ios_per_sec": 0,
00:14:54.083        "rw_mbytes_per_sec": 0,
00:14:54.083        "r_mbytes_per_sec": 0,
00:14:54.083        "w_mbytes_per_sec": 0
00:14:54.083      },
00:14:54.083      "claimed": true,
00:14:54.083      "claim_type": "exclusive_write",
00:14:54.083      "zoned": false,
00:14:54.083      "supported_io_types": {
00:14:54.083        "read": true,
00:14:54.083        "write": true,
00:14:54.084        "unmap": true,
00:14:54.084        "write_zeroes": true,
00:14:54.084        "flush": true,
00:14:54.084        "reset": true,
00:14:54.084        "compare": false,
00:14:54.084        "compare_and_write": false,
00:14:54.084        "abort": true,
00:14:54.084        "nvme_admin": false,
00:14:54.084        "nvme_io": false
00:14:54.084      },
00:14:54.084      "memory_domains": [
00:14:54.084        {
00:14:54.084          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:54.084          "dma_device_type": 2
00:14:54.084        }
00:14:54.084      ],
00:14:54.084      "driver_specific": {}
00:14:54.084    }
00:14:54.084  ]
00:14:54.084   23:48:24	-- common/autotest_common.sh@905 -- # return 0
00:14:54.084   23:48:24	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:14:54.084   23:48:24	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:54.084   23:48:24	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:14:54.084   23:48:24	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:54.084   23:48:24	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:54.084   23:48:24	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:54.084   23:48:24	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:54.084   23:48:24	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:54.084   23:48:24	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:54.084   23:48:24	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:54.084   23:48:24	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:54.084   23:48:24	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:54.084    23:48:24	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:54.084    23:48:24	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:54.370   23:48:24	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:54.370    "name": "Existed_Raid",
00:14:54.370    "uuid": "0b5bad60-0b68-45f2-81b8-c03a9c39b265",
00:14:54.370    "strip_size_kb": 0,
00:14:54.370    "state": "online",
00:14:54.370    "raid_level": "raid1",
00:14:54.370    "superblock": true,
00:14:54.370    "num_base_bdevs": 2,
00:14:54.370    "num_base_bdevs_discovered": 2,
00:14:54.370    "num_base_bdevs_operational": 2,
00:14:54.370    "base_bdevs_list": [
00:14:54.370      {
00:14:54.370        "name": "BaseBdev1",
00:14:54.370        "uuid": "ec4ed240-f3b7-419b-9eca-ff07e9318063",
00:14:54.370        "is_configured": true,
00:14:54.370        "data_offset": 2048,
00:14:54.370        "data_size": 63488
00:14:54.370      },
00:14:54.370      {
00:14:54.370        "name": "BaseBdev2",
00:14:54.370        "uuid": "dca231d4-7f53-4ce3-ad48-a149d19797cd",
00:14:54.370        "is_configured": true,
00:14:54.370        "data_offset": 2048,
00:14:54.370        "data_size": 63488
00:14:54.370      }
00:14:54.370    ]
00:14:54.370  }'
00:14:54.370   23:48:24	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:54.370   23:48:24	-- common/autotest_common.sh@10 -- # set +x
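
The superblock cost is visible in the dump above: each malloc member is 65536 blocks of 512 B, yet the array reports data_offset 2048 and data_size 63488 per member, and 2048 + 63488 = 65536, so 2048 blocks (1 MiB) at the head of every member now hold raid metadata. A quick check of that accounting against the live target:

    # Confirm superblock accounting: data_offset + data_size == member num_blocks.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all |
      jq '.[] | select(.name == "Existed_Raid").base_bdevs_list[]
          | {name, total: (.data_offset + .data_size)}'
    # Expected: total == 65536 for each 512-byte-block, 65536-block malloc member.
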
00:14:54.939   23:48:25	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:55.197  [2024-12-13 23:48:25.802077] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:55.197   23:48:25	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:14:55.197   23:48:25	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid1
00:14:55.198   23:48:25	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:14:55.198   23:48:25	-- bdev/bdev_raid.sh@196 -- # return 0
00:14:55.198   23:48:25	-- bdev/bdev_raid.sh@267 -- # expected_state=online
00:14:55.198   23:48:25	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:14:55.198   23:48:25	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:55.198   23:48:25	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:55.198   23:48:25	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:55.198   23:48:25	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:55.198   23:48:25	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:14:55.198   23:48:25	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:55.198   23:48:25	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:55.198   23:48:25	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:55.198   23:48:25	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:55.198    23:48:25	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:55.198    23:48:25	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:55.456   23:48:26	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:55.456    "name": "Existed_Raid",
00:14:55.456    "uuid": "0b5bad60-0b68-45f2-81b8-c03a9c39b265",
00:14:55.456    "strip_size_kb": 0,
00:14:55.456    "state": "online",
00:14:55.456    "raid_level": "raid1",
00:14:55.456    "superblock": true,
00:14:55.456    "num_base_bdevs": 2,
00:14:55.456    "num_base_bdevs_discovered": 1,
00:14:55.456    "num_base_bdevs_operational": 1,
00:14:55.456    "base_bdevs_list": [
00:14:55.456      {
00:14:55.456        "name": null,
00:14:55.456        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:55.456        "is_configured": false,
00:14:55.456        "data_offset": 2048,
00:14:55.456        "data_size": 63488
00:14:55.456      },
00:14:55.456      {
00:14:55.456        "name": "BaseBdev2",
00:14:55.456        "uuid": "dca231d4-7f53-4ce3-ad48-a149d19797cd",
00:14:55.456        "is_configured": true,
00:14:55.456        "data_offset": 2048,
00:14:55.456        "data_size": 63488
00:14:55.456      }
00:14:55.456    ]
00:14:55.456  }'
00:14:55.456   23:48:26	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:55.456   23:48:26	-- common/autotest_common.sh@10 -- # set +x
00:14:56.024   23:48:26	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:14:56.024   23:48:26	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:56.024    23:48:26	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:56.024    23:48:26	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:14:56.282   23:48:26	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:14:56.282   23:48:26	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:56.282   23:48:26	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:14:56.540  [2024-12-13 23:48:27.110132] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:56.540  [2024-12-13 23:48:27.110331] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:56.540  [2024-12-13 23:48:27.110546] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:56.540  [2024-12-13 23:48:27.182176] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:56.540  [2024-12-13 23:48:27.182397] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline
00:14:56.540   23:48:27	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:14:56.540   23:48:27	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:56.540    23:48:27	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:14:56.540    23:48:27	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:56.799   23:48:27	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:14:56.800   23:48:27	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:14:56.800   23:48:27	-- bdev/bdev_raid.sh@287 -- # killprocess 114166
00:14:56.800   23:48:27	-- common/autotest_common.sh@936 -- # '[' -z 114166 ']'
00:14:56.800   23:48:27	-- common/autotest_common.sh@940 -- # kill -0 114166
00:14:56.800    23:48:27	-- common/autotest_common.sh@941 -- # uname
00:14:56.800   23:48:27	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:56.800    23:48:27	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114166
00:14:56.800  killing process with pid 114166
00:14:56.800   23:48:27	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:56.800   23:48:27	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:56.800   23:48:27	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 114166'
00:14:56.800   23:48:27	-- common/autotest_common.sh@955 -- # kill 114166
00:14:56.800   23:48:27	-- common/autotest_common.sh@960 -- # wait 114166
00:14:56.800  [2024-12-13 23:48:27.410284] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:56.800  [2024-12-13 23:48:27.410375] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:57.736  ************************************
00:14:57.736  END TEST raid_state_function_test_sb
00:14:57.736  ************************************
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@289 -- # return 0
00:14:57.736  
00:14:57.736  real	0m10.080s
00:14:57.736  user	0m17.438s
00:14:57.736  sys	0m1.170s
00:14:57.736   23:48:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:57.736   23:48:28	-- common/autotest_common.sh@10 -- # set +x
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2
00:14:57.736   23:48:28	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:14:57.736   23:48:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:57.736   23:48:28	-- common/autotest_common.sh@10 -- # set +x
00:14:57.736  ************************************
00:14:57.736  START TEST raid_superblock_test
00:14:57.736  ************************************
00:14:57.736   23:48:28	-- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 2
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@338 -- # local raid_level=raid1
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']'
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@353 -- # strip_size=0
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@357 -- # raid_pid=114488
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:14:57.736   23:48:28	-- bdev/bdev_raid.sh@358 -- # waitforlisten 114488 /var/tmp/spdk-raid.sock
00:14:57.736   23:48:28	-- common/autotest_common.sh@829 -- # '[' -z 114488 ']'
00:14:57.736   23:48:28	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:57.736   23:48:28	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:57.736   23:48:28	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:57.736  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:57.736   23:48:28	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:57.736   23:48:28	-- common/autotest_common.sh@10 -- # set +x
00:14:57.736  [2024-12-13 23:48:28.458824] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:57.736  [2024-12-13 23:48:28.459297] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114488 ]
00:14:57.995  [2024-12-13 23:48:28.629842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:58.254  [2024-12-13 23:48:28.788282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:58.254  [2024-12-13 23:48:28.954696] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:58.820   23:48:29	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:58.820   23:48:29	-- common/autotest_common.sh@862 -- # return 0
00:14:58.820   23:48:29	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:14:58.820   23:48:29	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:58.820   23:48:29	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:14:58.820   23:48:29	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:14:58.820   23:48:29	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:14:58.820   23:48:29	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:58.820   23:48:29	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:14:58.820   23:48:29	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:58.820   23:48:29	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:14:59.078  malloc1
00:14:59.078   23:48:29	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:59.078  [2024-12-13 23:48:29.788275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:59.078  [2024-12-13 23:48:29.788879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:59.078  [2024-12-13 23:48:29.789177] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:14:59.078  [2024-12-13 23:48:29.789456] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:59.078  [2024-12-13 23:48:29.791865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:59.078  [2024-12-13 23:48:29.792128] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:59.078  pt1
00:14:59.078   23:48:29	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:14:59.078   23:48:29	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:59.078   23:48:29	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:14:59.078   23:48:29	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:14:59.078   23:48:29	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:14:59.078   23:48:29	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:59.078   23:48:29	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:14:59.078   23:48:29	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:59.078   23:48:29	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:14:59.337  malloc2
00:14:59.337   23:48:30	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:59.595  [2024-12-13 23:48:30.263704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:59.595  [2024-12-13 23:48:30.264089] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:59.595  [2024-12-13 23:48:30.264368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:14:59.595  [2024-12-13 23:48:30.264643] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:59.595  [2024-12-13 23:48:30.266999] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:59.595  [2024-12-13 23:48:30.267291] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:59.595  pt2
00:14:59.595   23:48:30	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:14:59.595   23:48:30	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:59.595   23:48:30	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
00:14:59.854  [2024-12-13 23:48:30.455877] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:59.854  [2024-12-13 23:48:30.457731] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:59.854  [2024-12-13 23:48:30.458078] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80
00:14:59.854  [2024-12-13 23:48:30.458207] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:59.854  [2024-12-13 23:48:30.458357] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:14:59.854  [2024-12-13 23:48:30.458846] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80
00:14:59.854  [2024-12-13 23:48:30.458982] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80
00:14:59.854  [2024-12-13 23:48:30.459205] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
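
raid_superblock_test builds each member as a passthru bdev stacked on a malloc bdev with a fixed UUID, so the raid superblock written through pt1/pt2 lands on the underlying malloc disks and survives deletion of the passthru layer. The construction traced above, condensed into one sequence:

    # Member = passthru-on-malloc with a fixed UUID, then a raid1 with superblock.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_create 32 512 -b malloc1
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $RPC bdev_malloc_create 32 512 -b malloc2
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    $RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
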
00:14:59.854   23:48:30	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:59.854   23:48:30	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:59.854   23:48:30	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:59.854   23:48:30	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:59.854   23:48:30	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:59.854   23:48:30	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:59.854   23:48:30	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:59.854   23:48:30	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:59.854   23:48:30	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:59.854   23:48:30	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:59.854    23:48:30	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:59.854    23:48:30	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:00.112   23:48:30	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:00.112    "name": "raid_bdev1",
00:15:00.112    "uuid": "8e67b77c-23d7-492e-8984-6f18b5f42e62",
00:15:00.112    "strip_size_kb": 0,
00:15:00.112    "state": "online",
00:15:00.112    "raid_level": "raid1",
00:15:00.112    "superblock": true,
00:15:00.112    "num_base_bdevs": 2,
00:15:00.112    "num_base_bdevs_discovered": 2,
00:15:00.112    "num_base_bdevs_operational": 2,
00:15:00.112    "base_bdevs_list": [
00:15:00.112      {
00:15:00.112        "name": "pt1",
00:15:00.112        "uuid": "2888dbcc-a19a-5fa0-8b6f-89bed893bb37",
00:15:00.112        "is_configured": true,
00:15:00.112        "data_offset": 2048,
00:15:00.112        "data_size": 63488
00:15:00.112      },
00:15:00.112      {
00:15:00.112        "name": "pt2",
00:15:00.112        "uuid": "27514cc9-d2b2-5535-9935-72a16ae4c417",
00:15:00.112        "is_configured": true,
00:15:00.112        "data_offset": 2048,
00:15:00.112        "data_size": 63488
00:15:00.112      }
00:15:00.112    ]
00:15:00.112  }'
00:15:00.112   23:48:30	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:00.112   23:48:30	-- common/autotest_common.sh@10 -- # set +x
00:15:00.678    23:48:31	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:15:00.678    23:48:31	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:15:00.937  [2024-12-13 23:48:31.440154] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:00.937   23:48:31	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=8e67b77c-23d7-492e-8984-6f18b5f42e62
00:15:00.937   23:48:31	-- bdev/bdev_raid.sh@380 -- # '[' -z 8e67b77c-23d7-492e-8984-6f18b5f42e62 ']'
00:15:00.937   23:48:31	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:15:01.196  [2024-12-13 23:48:31.680034] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:01.196  [2024-12-13 23:48:31.680182] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:01.196  [2024-12-13 23:48:31.680346] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:01.196  [2024-12-13 23:48:31.680547] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:01.196  [2024-12-13 23:48:31.680684] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline
00:15:01.196    23:48:31	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:01.196    23:48:31	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:15:01.196   23:48:31	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:15:01.196   23:48:31	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:15:01.196   23:48:31	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:15:01.196   23:48:31	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:15:01.454   23:48:32	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:15:01.454   23:48:32	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:15:01.713    23:48:32	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:15:01.713    23:48:32	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:15:01.971   23:48:32	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:15:01.971   23:48:32	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:15:01.971   23:48:32	-- common/autotest_common.sh@650 -- # local es=0
00:15:01.971   23:48:32	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:15:01.971   23:48:32	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:01.971   23:48:32	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:01.971    23:48:32	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:01.971   23:48:32	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:01.971    23:48:32	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:01.971   23:48:32	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:01.971   23:48:32	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:01.971   23:48:32	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:15:01.971   23:48:32	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:15:01.971  [2024-12-13 23:48:32.664179] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:15:01.971  [2024-12-13 23:48:32.666028] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:15:01.971  [2024-12-13 23:48:32.666207] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:15:01.971  [2024-12-13 23:48:32.666718] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:15:01.971  [2024-12-13 23:48:32.666985] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:01.971  [2024-12-13 23:48:32.667090] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring
00:15:01.971  request:
00:15:01.971  {
00:15:01.971    "name": "raid_bdev1",
00:15:01.971    "raid_level": "raid1",
00:15:01.971    "base_bdevs": [
00:15:01.971      "malloc1",
00:15:01.971      "malloc2"
00:15:01.971    ],
00:15:01.971    "superblock": false,
00:15:01.971    "method": "bdev_raid_create",
00:15:01.971    "req_id": 1
00:15:01.971  }
00:15:01.971  Got JSON-RPC error response
00:15:01.971  response:
00:15:01.971  {
00:15:01.971    "code": -17,
00:15:01.971    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:01.971  }
00:15:01.971   23:48:32	-- common/autotest_common.sh@653 -- # es=1
00:15:01.971   23:48:32	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:01.971   23:48:32	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:01.971   23:48:32	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
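
This negative check is the heart of the test: raid_bdev1 and both passthrus have been torn down, but the superblocks are still on malloc1 and malloc2, so creating a fresh raid directly on those malloc bdevs is refused with JSON-RPC error -17 ("File exists"), and the NOT wrapper only passes because the call fails. The failing call as issued above:

    # Expected to fail: malloc1/malloc2 still carry raid_bdev1's on-disk superblock.
    if ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
        echo "got the expected 'File exists' (-17) rejection"
    fi
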
00:15:01.971    23:48:32	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:15:01.971    23:48:32	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:02.230   23:48:32	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:15:02.230   23:48:32	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:15:02.230   23:48:32	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:02.489  [2024-12-13 23:48:33.084195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:02.489  [2024-12-13 23:48:33.084523] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:02.489  [2024-12-13 23:48:33.084786] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:15:02.489  [2024-12-13 23:48:33.085042] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:02.489  [2024-12-13 23:48:33.087323] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:02.489  [2024-12-13 23:48:33.087621] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:02.489  [2024-12-13 23:48:33.087937] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:15:02.489  [2024-12-13 23:48:33.088145] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:02.489  pt1
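
Recreating pt1 over malloc1 triggers bdev examine, raid_bdev_examine_load_sb_cb finds the superblock on pt1, and the bdev is re-claimed into a configuring raid_bdev1 with no explicit bdev_raid_create call. The verification that follows expects exactly that half-assembled state:

    # Recreate pt1; examine finds the on-disk superblock and re-claims it.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $RPC bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "raid_bdev1") |
             "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
    # Expected while only one of two members is back: "configuring 1/2"
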
00:15:02.489   23:48:33	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:15:02.489   23:48:33	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:02.489   23:48:33	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:02.489   23:48:33	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:15:02.489   23:48:33	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:15:02.489   23:48:33	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:15:02.489   23:48:33	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:02.489   23:48:33	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:02.489   23:48:33	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:02.489   23:48:33	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:02.489    23:48:33	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:02.489    23:48:33	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:02.747   23:48:33	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:02.747    "name": "raid_bdev1",
00:15:02.747    "uuid": "8e67b77c-23d7-492e-8984-6f18b5f42e62",
00:15:02.747    "strip_size_kb": 0,
00:15:02.747    "state": "configuring",
00:15:02.747    "raid_level": "raid1",
00:15:02.747    "superblock": true,
00:15:02.747    "num_base_bdevs": 2,
00:15:02.747    "num_base_bdevs_discovered": 1,
00:15:02.747    "num_base_bdevs_operational": 2,
00:15:02.747    "base_bdevs_list": [
00:15:02.747      {
00:15:02.747        "name": "pt1",
00:15:02.747        "uuid": "2888dbcc-a19a-5fa0-8b6f-89bed893bb37",
00:15:02.747        "is_configured": true,
00:15:02.747        "data_offset": 2048,
00:15:02.747        "data_size": 63488
00:15:02.747      },
00:15:02.747      {
00:15:02.747        "name": null,
00:15:02.747        "uuid": "27514cc9-d2b2-5535-9935-72a16ae4c417",
00:15:02.747        "is_configured": false,
00:15:02.747        "data_offset": 2048,
00:15:02.747        "data_size": 63488
00:15:02.747      }
00:15:02.747    ]
00:15:02.747  }'
00:15:02.747   23:48:33	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:02.747   23:48:33	-- common/autotest_common.sh@10 -- # set +x
00:15:03.313   23:48:33	-- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']'
00:15:03.313   23:48:33	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:15:03.313   23:48:33	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:15:03.313   23:48:33	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:03.571  [2024-12-13 23:48:34.232646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:03.571  [2024-12-13 23:48:34.233121] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:03.571  [2024-12-13 23:48:34.233425] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:15:03.571  [2024-12-13 23:48:34.233712] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:03.571  [2024-12-13 23:48:34.234390] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:03.571  [2024-12-13 23:48:34.234664] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:03.571  [2024-12-13 23:48:34.234979] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:15:03.571  [2024-12-13 23:48:34.235149] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:03.571  [2024-12-13 23:48:34.235350] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80
00:15:03.571  [2024-12-13 23:48:34.235465] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:15:03.571  [2024-12-13 23:48:34.235644] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:15:03.571  [2024-12-13 23:48:34.236098] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80
00:15:03.571  [2024-12-13 23:48:34.236231] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80
00:15:03.571  [2024-12-13 23:48:34.236444] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:03.571  pt2
00:15:03.571   23:48:34	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:15:03.571   23:48:34	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:15:03.571   23:48:34	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:03.571   23:48:34	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:03.572   23:48:34	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:03.572   23:48:34	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:15:03.572   23:48:34	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:15:03.572   23:48:34	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:15:03.572   23:48:34	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:03.572   23:48:34	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:03.572   23:48:34	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:03.572   23:48:34	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:03.572    23:48:34	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:03.572    23:48:34	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:03.830   23:48:34	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:03.830    "name": "raid_bdev1",
00:15:03.830    "uuid": "8e67b77c-23d7-492e-8984-6f18b5f42e62",
00:15:03.830    "strip_size_kb": 0,
00:15:03.830    "state": "online",
00:15:03.830    "raid_level": "raid1",
00:15:03.830    "superblock": true,
00:15:03.830    "num_base_bdevs": 2,
00:15:03.830    "num_base_bdevs_discovered": 2,
00:15:03.830    "num_base_bdevs_operational": 2,
00:15:03.830    "base_bdevs_list": [
00:15:03.830      {
00:15:03.830        "name": "pt1",
00:15:03.830        "uuid": "2888dbcc-a19a-5fa0-8b6f-89bed893bb37",
00:15:03.830        "is_configured": true,
00:15:03.830        "data_offset": 2048,
00:15:03.830        "data_size": 63488
00:15:03.830      },
00:15:03.830      {
00:15:03.830        "name": "pt2",
00:15:03.830        "uuid": "27514cc9-d2b2-5535-9935-72a16ae4c417",
00:15:03.830        "is_configured": true,
00:15:03.830        "data_offset": 2048,
00:15:03.830        "data_size": 63488
00:15:03.830      }
00:15:03.830    ]
00:15:03.830  }'
00:15:03.830   23:48:34	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:03.830   23:48:34	-- common/autotest_common.sh@10 -- # set +x
00:15:04.396    23:48:34	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:15:04.396    23:48:34	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:15:04.654  [2024-12-13 23:48:35.237026] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:04.654   23:48:35	-- bdev/bdev_raid.sh@430 -- # '[' 8e67b77c-23d7-492e-8984-6f18b5f42e62 '!=' 8e67b77c-23d7-492e-8984-6f18b5f42e62 ']'
00:15:04.654   23:48:35	-- bdev/bdev_raid.sh@434 -- # has_redundancy raid1
00:15:04.654   23:48:35	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:15:04.654   23:48:35	-- bdev/bdev_raid.sh@196 -- # return 0
00:15:04.654   23:48:35	-- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:15:04.912  [2024-12-13 23:48:35.424909] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:15:04.912   23:48:35	-- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:04.912   23:48:35	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:04.912   23:48:35	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:04.912   23:48:35	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:15:04.912   23:48:35	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:15:04.912   23:48:35	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:15:04.912   23:48:35	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:04.912   23:48:35	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:04.912   23:48:35	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:04.912   23:48:35	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:04.912    23:48:35	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:04.912    23:48:35	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:04.912   23:48:35	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:04.912    "name": "raid_bdev1",
00:15:04.912    "uuid": "8e67b77c-23d7-492e-8984-6f18b5f42e62",
00:15:04.912    "strip_size_kb": 0,
00:15:04.912    "state": "online",
00:15:04.912    "raid_level": "raid1",
00:15:04.912    "superblock": true,
00:15:04.912    "num_base_bdevs": 2,
00:15:04.912    "num_base_bdevs_discovered": 1,
00:15:04.912    "num_base_bdevs_operational": 1,
00:15:04.912    "base_bdevs_list": [
00:15:04.912      {
00:15:04.912        "name": null,
00:15:04.912        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:04.912        "is_configured": false,
00:15:04.912        "data_offset": 2048,
00:15:04.912        "data_size": 63488
00:15:04.912      },
00:15:04.912      {
00:15:04.912        "name": "pt2",
00:15:04.912        "uuid": "27514cc9-d2b2-5535-9935-72a16ae4c417",
00:15:04.912        "is_configured": true,
00:15:04.912        "data_offset": 2048,
00:15:04.912        "data_size": 63488
00:15:04.912      }
00:15:04.912    ]
00:15:04.912  }'
00:15:04.912   23:48:35	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:04.912   23:48:35	-- common/autotest_common.sh@10 -- # set +x
00:15:05.477   23:48:36	-- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:15:05.735  [2024-12-13 23:48:36.337032] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:05.735  [2024-12-13 23:48:36.337180] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:05.735  [2024-12-13 23:48:36.337329] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:05.735  [2024-12-13 23:48:36.337482] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:05.735  [2024-12-13 23:48:36.337599] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline
00:15:05.735    23:48:36	-- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:05.735    23:48:36	-- bdev/bdev_raid.sh@443 -- # jq -r '.[]'
00:15:05.993   23:48:36	-- bdev/bdev_raid.sh@443 -- # raid_bdev=
00:15:05.993   23:48:36	-- bdev/bdev_raid.sh@444 -- # '[' -n '' ']'
00:15:05.993   23:48:36	-- bdev/bdev_raid.sh@449 -- # (( i = 1 ))
00:15:05.993   23:48:36	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:15:05.993   23:48:36	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:15:05.993   23:48:36	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:15:05.993   23:48:36	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:15:05.993   23:48:36	-- bdev/bdev_raid.sh@454 -- # (( i = 1 ))
00:15:05.993   23:48:36	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:15:05.993   23:48:36	-- bdev/bdev_raid.sh@462 -- # i=1
00:15:05.993   23:48:36	-- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:06.251  [2024-12-13 23:48:36.893142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:06.251  [2024-12-13 23:48:36.893736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:06.251  [2024-12-13 23:48:36.894014] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:15:06.251  [2024-12-13 23:48:36.894266] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:06.251  [2024-12-13 23:48:36.896574] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:06.251  [2024-12-13 23:48:36.896865] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:06.251  [2024-12-13 23:48:36.897158] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:15:06.251  [2024-12-13 23:48:36.897320] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:06.251  [2024-12-13 23:48:36.897556] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980
00:15:06.251  [2024-12-13 23:48:36.897678] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:15:06.251  [2024-12-13 23:48:36.897804] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:15:06.251  pt2
00:15:06.251  [2024-12-13 23:48:36.898297] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980
00:15:06.251  [2024-12-13 23:48:36.898403] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980
00:15:06.251  [2024-12-13 23:48:36.898674] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:06.251   23:48:36	-- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:06.251   23:48:36	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:06.251   23:48:36	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:06.251   23:48:36	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:15:06.251   23:48:36	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:15:06.251   23:48:36	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:15:06.251   23:48:36	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:06.251   23:48:36	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:06.251   23:48:36	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:06.251   23:48:36	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:06.251    23:48:36	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:06.251    23:48:36	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:06.510   23:48:37	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:06.510    "name": "raid_bdev1",
00:15:06.510    "uuid": "8e67b77c-23d7-492e-8984-6f18b5f42e62",
00:15:06.510    "strip_size_kb": 0,
00:15:06.510    "state": "online",
00:15:06.510    "raid_level": "raid1",
00:15:06.510    "superblock": true,
00:15:06.510    "num_base_bdevs": 2,
00:15:06.510    "num_base_bdevs_discovered": 1,
00:15:06.510    "num_base_bdevs_operational": 1,
00:15:06.510    "base_bdevs_list": [
00:15:06.510      {
00:15:06.510        "name": null,
00:15:06.510        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:06.510        "is_configured": false,
00:15:06.510        "data_offset": 2048,
00:15:06.510        "data_size": 63488
00:15:06.510      },
00:15:06.510      {
00:15:06.510        "name": "pt2",
00:15:06.510        "uuid": "27514cc9-d2b2-5535-9935-72a16ae4c417",
00:15:06.510        "is_configured": true,
00:15:06.510        "data_offset": 2048,
00:15:06.510        "data_size": 63488
00:15:06.510      }
00:15:06.510    ]
00:15:06.510  }'
00:15:06.510   23:48:37	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:06.510   23:48:37	-- common/autotest_common.sh@10 -- # set +x
00:15:07.076   23:48:37	-- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']'
00:15:07.076    23:48:37	-- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:15:07.076    23:48:37	-- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid'
00:15:07.334  [2024-12-13 23:48:37.914025] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:07.334   23:48:37	-- bdev/bdev_raid.sh@506 -- # '[' 8e67b77c-23d7-492e-8984-6f18b5f42e62 '!=' 8e67b77c-23d7-492e-8984-6f18b5f42e62 ']'
00:15:07.334   23:48:37	-- bdev/bdev_raid.sh@511 -- # killprocess 114488
00:15:07.334   23:48:37	-- common/autotest_common.sh@936 -- # '[' -z 114488 ']'
00:15:07.334   23:48:37	-- common/autotest_common.sh@940 -- # kill -0 114488
00:15:07.334    23:48:37	-- common/autotest_common.sh@941 -- # uname
00:15:07.334   23:48:37	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:07.334    23:48:37	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114488
00:15:07.334   23:48:37	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:07.334   23:48:37	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:07.334   23:48:37	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 114488'
00:15:07.334  killing process with pid 114488
00:15:07.334   23:48:37	-- common/autotest_common.sh@955 -- # kill 114488
00:15:07.334  [2024-12-13 23:48:37.955222] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:07.334   23:48:37	-- common/autotest_common.sh@960 -- # wait 114488
00:15:07.334  [2024-12-13 23:48:37.955442] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:07.334  [2024-12-13 23:48:37.955686] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:07.334  [2024-12-13 23:48:37.955811] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline
00:15:07.593  [2024-12-13 23:48:38.083400] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@513 -- # return 0
00:15:08.528  
00:15:08.528  real	0m10.628s
00:15:08.528  user	0m18.628s
00:15:08.528  sys	0m1.376s
00:15:08.528   23:48:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:08.528   23:48:39	-- common/autotest_common.sh@10 -- # set +x
00:15:08.528  ************************************
00:15:08.528  END TEST raid_superblock_test
00:15:08.528  ************************************
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@725 -- # for n in {2..4}
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false
00:15:08.528   23:48:39	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:15:08.528   23:48:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:08.528   23:48:39	-- common/autotest_common.sh@10 -- # set +x
00:15:08.528  ************************************
00:15:08.528  START TEST raid_state_function_test
00:15:08.528  ************************************
00:15:08.528   23:48:39	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 false
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:15:08.528    23:48:39	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:15:08.528    23:48:39	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:08.528    23:48:39	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:15:08.528    23:48:39	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:08.528    23:48:39	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:08.528    23:48:39	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:15:08.528    23:48:39	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:08.528    23:48:39	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:08.528    23:48:39	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:15:08.528    23:48:39	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:08.528    23:48:39	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@226 -- # raid_pid=114833
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 114833'
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:15:08.528  Process raid pid: 114833
00:15:08.528   23:48:39	-- bdev/bdev_raid.sh@228 -- # waitforlisten 114833 /var/tmp/spdk-raid.sock
00:15:08.528   23:48:39	-- common/autotest_common.sh@829 -- # '[' -z 114833 ']'
00:15:08.528   23:48:39	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:15:08.528   23:48:39	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:08.528   23:48:39	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:15:08.528  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:15:08.528   23:48:39	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:08.528   23:48:39	-- common/autotest_common.sh@10 -- # set +x
00:15:08.528  [2024-12-13 23:48:39.158890] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:08.528  [2024-12-13 23:48:39.159365] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:08.787  [2024-12-13 23:48:39.330083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:08.787  [2024-12-13 23:48:39.488960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:09.045  [2024-12-13 23:48:39.658300] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:09.611   23:48:40	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:09.611   23:48:40	-- common/autotest_common.sh@862 -- # return 0
00:15:09.611   23:48:40	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:09.612  [2024-12-13 23:48:40.277242] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:09.612  [2024-12-13 23:48:40.278585] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:09.612  [2024-12-13 23:48:40.278733] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:09.612  [2024-12-13 23:48:40.278900] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:09.612  [2024-12-13 23:48:40.279032] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:09.612  [2024-12-13 23:48:40.279215] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:09.612   23:48:40	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:15:09.612   23:48:40	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:09.612   23:48:40	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:09.612   23:48:40	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:09.612   23:48:40	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:09.612   23:48:40	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:09.612   23:48:40	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:09.612   23:48:40	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:09.612   23:48:40	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:09.612   23:48:40	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:09.612    23:48:40	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:09.612    23:48:40	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:09.870   23:48:40	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:09.870    "name": "Existed_Raid",
00:15:09.870    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:09.870    "strip_size_kb": 64,
00:15:09.870    "state": "configuring",
00:15:09.870    "raid_level": "raid0",
00:15:09.870    "superblock": false,
00:15:09.870    "num_base_bdevs": 3,
00:15:09.870    "num_base_bdevs_discovered": 0,
00:15:09.870    "num_base_bdevs_operational": 3,
00:15:09.870    "base_bdevs_list": [
00:15:09.870      {
00:15:09.870        "name": "BaseBdev1",
00:15:09.870        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:09.870        "is_configured": false,
00:15:09.870        "data_offset": 0,
00:15:09.870        "data_size": 0
00:15:09.870      },
00:15:09.870      {
00:15:09.870        "name": "BaseBdev2",
00:15:09.870        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:09.870        "is_configured": false,
00:15:09.870        "data_offset": 0,
00:15:09.870        "data_size": 0
00:15:09.870      },
00:15:09.870      {
00:15:09.870        "name": "BaseBdev3",
00:15:09.870        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:09.870        "is_configured": false,
00:15:09.870        "data_offset": 0,
00:15:09.870        "data_size": 0
00:15:09.870      }
00:15:09.870    ]
00:15:09.870  }'
00:15:09.870   23:48:40	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:09.870   23:48:40	-- common/autotest_common.sh@10 -- # set +x
00:15:10.437   23:48:41	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:15:10.695  [2024-12-13 23:48:41.377307] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:10.695  [2024-12-13 23:48:41.377468] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:15:10.695   23:48:41	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:10.954  [2024-12-13 23:48:41.625376] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:10.954  [2024-12-13 23:48:41.625802] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:10.954  [2024-12-13 23:48:41.625964] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:10.954  [2024-12-13 23:48:41.626131] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:10.954  [2024-12-13 23:48:41.626272] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:10.954  [2024-12-13 23:48:41.626439] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:10.954   23:48:41	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:15:11.212  [2024-12-13 23:48:41.887115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:11.212  BaseBdev1
00:15:11.212   23:48:41	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:15:11.212   23:48:41	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:15:11.212   23:48:41	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:11.212   23:48:41	-- common/autotest_common.sh@899 -- # local i
00:15:11.212   23:48:41	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:11.212   23:48:41	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:11.212   23:48:41	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:11.470   23:48:42	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:11.728  [
00:15:11.728    {
00:15:11.728      "name": "BaseBdev1",
00:15:11.728      "aliases": [
00:15:11.728        "ca8314eb-3fe9-40a6-b1f5-759b4b9cdb0b"
00:15:11.728      ],
00:15:11.728      "product_name": "Malloc disk",
00:15:11.728      "block_size": 512,
00:15:11.728      "num_blocks": 65536,
00:15:11.728      "uuid": "ca8314eb-3fe9-40a6-b1f5-759b4b9cdb0b",
00:15:11.728      "assigned_rate_limits": {
00:15:11.728        "rw_ios_per_sec": 0,
00:15:11.728        "rw_mbytes_per_sec": 0,
00:15:11.728        "r_mbytes_per_sec": 0,
00:15:11.728        "w_mbytes_per_sec": 0
00:15:11.728      },
00:15:11.728      "claimed": true,
00:15:11.728      "claim_type": "exclusive_write",
00:15:11.728      "zoned": false,
00:15:11.728      "supported_io_types": {
00:15:11.728        "read": true,
00:15:11.728        "write": true,
00:15:11.728        "unmap": true,
00:15:11.728        "write_zeroes": true,
00:15:11.728        "flush": true,
00:15:11.728        "reset": true,
00:15:11.728        "compare": false,
00:15:11.728        "compare_and_write": false,
00:15:11.728        "abort": true,
00:15:11.729        "nvme_admin": false,
00:15:11.729        "nvme_io": false
00:15:11.729      },
00:15:11.729      "memory_domains": [
00:15:11.729        {
00:15:11.729          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:11.729          "dma_device_type": 2
00:15:11.729        }
00:15:11.729      ],
00:15:11.729      "driver_specific": {}
00:15:11.729    }
00:15:11.729  ]
00:15:11.729   23:48:42	-- common/autotest_common.sh@905 -- # return 0
00:15:11.729   23:48:42	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:15:11.729   23:48:42	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:11.729   23:48:42	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:11.729   23:48:42	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:11.729   23:48:42	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:11.729   23:48:42	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:11.729   23:48:42	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:11.729   23:48:42	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:11.729   23:48:42	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:11.729   23:48:42	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:11.729    23:48:42	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:11.729    23:48:42	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:11.729   23:48:42	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:11.729    "name": "Existed_Raid",
00:15:11.729    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:11.729    "strip_size_kb": 64,
00:15:11.729    "state": "configuring",
00:15:11.729    "raid_level": "raid0",
00:15:11.729    "superblock": false,
00:15:11.729    "num_base_bdevs": 3,
00:15:11.729    "num_base_bdevs_discovered": 1,
00:15:11.729    "num_base_bdevs_operational": 3,
00:15:11.729    "base_bdevs_list": [
00:15:11.729      {
00:15:11.729        "name": "BaseBdev1",
00:15:11.729        "uuid": "ca8314eb-3fe9-40a6-b1f5-759b4b9cdb0b",
00:15:11.729        "is_configured": true,
00:15:11.729        "data_offset": 0,
00:15:11.729        "data_size": 65536
00:15:11.729      },
00:15:11.729      {
00:15:11.729        "name": "BaseBdev2",
00:15:11.729        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:11.729        "is_configured": false,
00:15:11.729        "data_offset": 0,
00:15:11.729        "data_size": 0
00:15:11.729      },
00:15:11.729      {
00:15:11.729        "name": "BaseBdev3",
00:15:11.729        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:11.729        "is_configured": false,
00:15:11.729        "data_offset": 0,
00:15:11.729        "data_size": 0
00:15:11.729      }
00:15:11.729    ]
00:15:11.729  }'
00:15:11.729   23:48:42	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:11.729   23:48:42	-- common/autotest_common.sh@10 -- # set +x
00:15:12.296   23:48:42	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:15:12.554  [2024-12-13 23:48:43.243394] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:12.554  [2024-12-13 23:48:43.243570] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:15:12.554   23:48:43	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:15:12.554   23:48:43	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:12.812  [2024-12-13 23:48:43.495496] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:12.812  [2024-12-13 23:48:43.497307] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:12.812  [2024-12-13 23:48:43.497818] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:12.812  [2024-12-13 23:48:43.497968] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:12.812  [2024-12-13 23:48:43.498156] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:12.812   23:48:43	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:15:12.812   23:48:43	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:12.812   23:48:43	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:15:12.812   23:48:43	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:12.812   23:48:43	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:12.812   23:48:43	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:12.812   23:48:43	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:12.812   23:48:43	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:12.812   23:48:43	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:12.812   23:48:43	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:12.812   23:48:43	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:12.812   23:48:43	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:12.812    23:48:43	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:12.812    23:48:43	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:13.072   23:48:43	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:13.072    "name": "Existed_Raid",
00:15:13.072    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:13.072    "strip_size_kb": 64,
00:15:13.072    "state": "configuring",
00:15:13.072    "raid_level": "raid0",
00:15:13.072    "superblock": false,
00:15:13.072    "num_base_bdevs": 3,
00:15:13.072    "num_base_bdevs_discovered": 1,
00:15:13.072    "num_base_bdevs_operational": 3,
00:15:13.072    "base_bdevs_list": [
00:15:13.072      {
00:15:13.072        "name": "BaseBdev1",
00:15:13.072        "uuid": "ca8314eb-3fe9-40a6-b1f5-759b4b9cdb0b",
00:15:13.072        "is_configured": true,
00:15:13.072        "data_offset": 0,
00:15:13.072        "data_size": 65536
00:15:13.072      },
00:15:13.072      {
00:15:13.072        "name": "BaseBdev2",
00:15:13.072        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:13.072        "is_configured": false,
00:15:13.072        "data_offset": 0,
00:15:13.072        "data_size": 0
00:15:13.072      },
00:15:13.072      {
00:15:13.072        "name": "BaseBdev3",
00:15:13.072        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:13.072        "is_configured": false,
00:15:13.072        "data_offset": 0,
00:15:13.072        "data_size": 0
00:15:13.072      }
00:15:13.072    ]
00:15:13.072  }'
00:15:13.072   23:48:43	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:13.072   23:48:43	-- common/autotest_common.sh@10 -- # set +x
00:15:13.669   23:48:44	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:15:13.928  [2024-12-13 23:48:44.573189] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:13.928  BaseBdev2
00:15:13.928   23:48:44	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:15:13.928   23:48:44	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:15:13.928   23:48:44	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:13.928   23:48:44	-- common/autotest_common.sh@899 -- # local i
00:15:13.928   23:48:44	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:13.928   23:48:44	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:13.928   23:48:44	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:14.186   23:48:44	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:14.451  [
00:15:14.451    {
00:15:14.451      "name": "BaseBdev2",
00:15:14.451      "aliases": [
00:15:14.451        "d9732069-ae38-4b78-a422-882fbb9d5234"
00:15:14.451      ],
00:15:14.451      "product_name": "Malloc disk",
00:15:14.451      "block_size": 512,
00:15:14.451      "num_blocks": 65536,
00:15:14.451      "uuid": "d9732069-ae38-4b78-a422-882fbb9d5234",
00:15:14.451      "assigned_rate_limits": {
00:15:14.451        "rw_ios_per_sec": 0,
00:15:14.451        "rw_mbytes_per_sec": 0,
00:15:14.451        "r_mbytes_per_sec": 0,
00:15:14.451        "w_mbytes_per_sec": 0
00:15:14.451      },
00:15:14.451      "claimed": true,
00:15:14.451      "claim_type": "exclusive_write",
00:15:14.451      "zoned": false,
00:15:14.451      "supported_io_types": {
00:15:14.451        "read": true,
00:15:14.451        "write": true,
00:15:14.451        "unmap": true,
00:15:14.451        "write_zeroes": true,
00:15:14.451        "flush": true,
00:15:14.451        "reset": true,
00:15:14.451        "compare": false,
00:15:14.451        "compare_and_write": false,
00:15:14.451        "abort": true,
00:15:14.451        "nvme_admin": false,
00:15:14.451        "nvme_io": false
00:15:14.451      },
00:15:14.451      "memory_domains": [
00:15:14.451        {
00:15:14.451          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:14.451          "dma_device_type": 2
00:15:14.451        }
00:15:14.451      ],
00:15:14.451      "driver_specific": {}
00:15:14.451    }
00:15:14.451  ]
00:15:14.451   23:48:45	-- common/autotest_common.sh@905 -- # return 0
00:15:14.451   23:48:45	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:15:14.451   23:48:45	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:14.451   23:48:45	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:15:14.451   23:48:45	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:14.451   23:48:45	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:14.451   23:48:45	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:14.451   23:48:45	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:14.451   23:48:45	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:14.451   23:48:45	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:14.451   23:48:45	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:14.451   23:48:45	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:14.451   23:48:45	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:14.451    23:48:45	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:14.452    23:48:45	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:14.710   23:48:45	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:14.710    "name": "Existed_Raid",
00:15:14.710    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:14.710    "strip_size_kb": 64,
00:15:14.710    "state": "configuring",
00:15:14.710    "raid_level": "raid0",
00:15:14.710    "superblock": false,
00:15:14.710    "num_base_bdevs": 3,
00:15:14.710    "num_base_bdevs_discovered": 2,
00:15:14.710    "num_base_bdevs_operational": 3,
00:15:14.710    "base_bdevs_list": [
00:15:14.710      {
00:15:14.710        "name": "BaseBdev1",
00:15:14.710        "uuid": "ca8314eb-3fe9-40a6-b1f5-759b4b9cdb0b",
00:15:14.710        "is_configured": true,
00:15:14.710        "data_offset": 0,
00:15:14.710        "data_size": 65536
00:15:14.710      },
00:15:14.710      {
00:15:14.710        "name": "BaseBdev2",
00:15:14.710        "uuid": "d9732069-ae38-4b78-a422-882fbb9d5234",
00:15:14.710        "is_configured": true,
00:15:14.710        "data_offset": 0,
00:15:14.710        "data_size": 65536
00:15:14.710      },
00:15:14.710      {
00:15:14.710        "name": "BaseBdev3",
00:15:14.710        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:14.710        "is_configured": false,
00:15:14.710        "data_offset": 0,
00:15:14.710        "data_size": 0
00:15:14.710      }
00:15:14.710    ]
00:15:14.710  }'
00:15:14.710   23:48:45	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:14.710   23:48:45	-- common/autotest_common.sh@10 -- # set +x
00:15:15.276   23:48:45	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:15:15.534  [2024-12-13 23:48:46.030425] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:15.534  [2024-12-13 23:48:46.030607] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:15:15.534  [2024-12-13 23:48:46.030654] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:15:15.534  [2024-12-13 23:48:46.030857] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0
00:15:15.534  [2024-12-13 23:48:46.031343] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:15:15.534  [2024-12-13 23:48:46.031498] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80
00:15:15.534  [2024-12-13 23:48:46.031861] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:15.534  BaseBdev3
00:15:15.534   23:48:46	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:15:15.534   23:48:46	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:15:15.534   23:48:46	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:15.534   23:48:46	-- common/autotest_common.sh@899 -- # local i
00:15:15.534   23:48:46	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:15.534   23:48:46	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:15.534   23:48:46	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:15.534   23:48:46	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:15.792  [
00:15:15.792    {
00:15:15.792      "name": "BaseBdev3",
00:15:15.792      "aliases": [
00:15:15.792        "5c90eca9-377e-4070-9b0a-a17cfd5f146d"
00:15:15.792      ],
00:15:15.792      "product_name": "Malloc disk",
00:15:15.792      "block_size": 512,
00:15:15.792      "num_blocks": 65536,
00:15:15.792      "uuid": "5c90eca9-377e-4070-9b0a-a17cfd5f146d",
00:15:15.792      "assigned_rate_limits": {
00:15:15.792        "rw_ios_per_sec": 0,
00:15:15.792        "rw_mbytes_per_sec": 0,
00:15:15.792        "r_mbytes_per_sec": 0,
00:15:15.792        "w_mbytes_per_sec": 0
00:15:15.792      },
00:15:15.792      "claimed": true,
00:15:15.792      "claim_type": "exclusive_write",
00:15:15.792      "zoned": false,
00:15:15.792      "supported_io_types": {
00:15:15.792        "read": true,
00:15:15.792        "write": true,
00:15:15.792        "unmap": true,
00:15:15.792        "write_zeroes": true,
00:15:15.792        "flush": true,
00:15:15.792        "reset": true,
00:15:15.792        "compare": false,
00:15:15.792        "compare_and_write": false,
00:15:15.792        "abort": true,
00:15:15.792        "nvme_admin": false,
00:15:15.792        "nvme_io": false
00:15:15.792      },
00:15:15.792      "memory_domains": [
00:15:15.792        {
00:15:15.792          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:15.792          "dma_device_type": 2
00:15:15.792        }
00:15:15.792      ],
00:15:15.792      "driver_specific": {}
00:15:15.792    }
00:15:15.792  ]
00:15:15.792   23:48:46	-- common/autotest_common.sh@905 -- # return 0
00:15:15.792   23:48:46	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:15:15.792   23:48:46	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:15.792   23:48:46	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:15:15.792   23:48:46	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:15.792   23:48:46	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:15.792   23:48:46	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:15.792   23:48:46	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:15.792   23:48:46	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:15.792   23:48:46	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:15.792   23:48:46	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:15.792   23:48:46	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:15.792   23:48:46	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:15.792    23:48:46	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:15.792    23:48:46	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:16.050   23:48:46	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:16.050    "name": "Existed_Raid",
00:15:16.050    "uuid": "760d878c-2e6f-447c-adee-c157b2b9057d",
00:15:16.050    "strip_size_kb": 64,
00:15:16.050    "state": "online",
00:15:16.050    "raid_level": "raid0",
00:15:16.050    "superblock": false,
00:15:16.050    "num_base_bdevs": 3,
00:15:16.050    "num_base_bdevs_discovered": 3,
00:15:16.050    "num_base_bdevs_operational": 3,
00:15:16.050    "base_bdevs_list": [
00:15:16.050      {
00:15:16.050        "name": "BaseBdev1",
00:15:16.050        "uuid": "ca8314eb-3fe9-40a6-b1f5-759b4b9cdb0b",
00:15:16.050        "is_configured": true,
00:15:16.050        "data_offset": 0,
00:15:16.050        "data_size": 65536
00:15:16.050      },
00:15:16.050      {
00:15:16.050        "name": "BaseBdev2",
00:15:16.050        "uuid": "d9732069-ae38-4b78-a422-882fbb9d5234",
00:15:16.050        "is_configured": true,
00:15:16.050        "data_offset": 0,
00:15:16.050        "data_size": 65536
00:15:16.050      },
00:15:16.050      {
00:15:16.050        "name": "BaseBdev3",
00:15:16.050        "uuid": "5c90eca9-377e-4070-9b0a-a17cfd5f146d",
00:15:16.050        "is_configured": true,
00:15:16.050        "data_offset": 0,
00:15:16.050        "data_size": 65536
00:15:16.050      }
00:15:16.050    ]
00:15:16.050  }'
00:15:16.050   23:48:46	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:16.050   23:48:46	-- common/autotest_common.sh@10 -- # set +x
00:15:16.616   23:48:47	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:15:16.875  [2024-12-13 23:48:47.430770] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:16.875  [2024-12-13 23:48:47.430921] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:16.875  [2024-12-13 23:48:47.431081] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:16.875   23:48:47	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:15:16.875   23:48:47	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:15:16.875   23:48:47	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:15:16.875   23:48:47	-- bdev/bdev_raid.sh@197 -- # return 1
00:15:16.875   23:48:47	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:15:16.875   23:48:47	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:15:16.875   23:48:47	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:16.875   23:48:47	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:15:16.875   23:48:47	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:16.875   23:48:47	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:16.875   23:48:47	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:15:16.875   23:48:47	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:16.875   23:48:47	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:16.875   23:48:47	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:16.875   23:48:47	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:16.875    23:48:47	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:16.875    23:48:47	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:17.133   23:48:47	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:17.133    "name": "Existed_Raid",
00:15:17.133    "uuid": "760d878c-2e6f-447c-adee-c157b2b9057d",
00:15:17.133    "strip_size_kb": 64,
00:15:17.133    "state": "offline",
00:15:17.133    "raid_level": "raid0",
00:15:17.133    "superblock": false,
00:15:17.133    "num_base_bdevs": 3,
00:15:17.133    "num_base_bdevs_discovered": 2,
00:15:17.133    "num_base_bdevs_operational": 2,
00:15:17.133    "base_bdevs_list": [
00:15:17.133      {
00:15:17.133        "name": null,
00:15:17.133        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:17.133        "is_configured": false,
00:15:17.133        "data_offset": 0,
00:15:17.133        "data_size": 65536
00:15:17.133      },
00:15:17.133      {
00:15:17.133        "name": "BaseBdev2",
00:15:17.133        "uuid": "d9732069-ae38-4b78-a422-882fbb9d5234",
00:15:17.133        "is_configured": true,
00:15:17.133        "data_offset": 0,
00:15:17.133        "data_size": 65536
00:15:17.133      },
00:15:17.133      {
00:15:17.133        "name": "BaseBdev3",
00:15:17.133        "uuid": "5c90eca9-377e-4070-9b0a-a17cfd5f146d",
00:15:17.133        "is_configured": true,
00:15:17.133        "data_offset": 0,
00:15:17.133        "data_size": 65536
00:15:17.133      }
00:15:17.133    ]
00:15:17.133  }'
00:15:17.133   23:48:47	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:17.133   23:48:47	-- common/autotest_common.sh@10 -- # set +x
00:15:17.699   23:48:48	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:15:17.699   23:48:48	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:17.699    23:48:48	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:17.699    23:48:48	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:15:17.956   23:48:48	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:15:17.956   23:48:48	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:17.956   23:48:48	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:15:18.214  [2024-12-13 23:48:48.839813] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:18.214   23:48:48	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:15:18.214   23:48:48	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:18.214    23:48:48	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:18.214    23:48:48	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:15:18.472   23:48:49	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:15:18.472   23:48:49	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:18.472   23:48:49	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:15:18.731  [2024-12-13 23:48:49.401709] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:18.731  [2024-12-13 23:48:49.401891] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline
00:15:18.989   23:48:49	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:15:18.989   23:48:49	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:18.989    23:48:49	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:18.989    23:48:49	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:15:18.989   23:48:49	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:15:18.989   23:48:49	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:15:18.989   23:48:49	-- bdev/bdev_raid.sh@287 -- # killprocess 114833
00:15:18.989   23:48:49	-- common/autotest_common.sh@936 -- # '[' -z 114833 ']'
00:15:18.989   23:48:49	-- common/autotest_common.sh@940 -- # kill -0 114833
00:15:18.989    23:48:49	-- common/autotest_common.sh@941 -- # uname
00:15:19.248   23:48:49	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:19.248    23:48:49	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114833
00:15:19.248   23:48:49	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:19.248   23:48:49	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:19.248   23:48:49	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 114833'
00:15:19.248  killing process with pid 114833
00:15:19.248   23:48:49	-- common/autotest_common.sh@955 -- # kill 114833
00:15:19.248  [2024-12-13 23:48:49.746138] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:19.248   23:48:49	-- common/autotest_common.sh@960 -- # wait 114833
00:15:19.248  [2024-12-13 23:48:49.746403] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:20.183  ************************************
00:15:20.183  END TEST raid_state_function_test
00:15:20.183  ************************************
00:15:20.183   23:48:50	-- bdev/bdev_raid.sh@289 -- # return 0
00:15:20.183  
00:15:20.183  real	0m11.592s
00:15:20.183  user	0m20.538s
00:15:20.183  sys	0m1.330s
00:15:20.183   23:48:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:20.183   23:48:50	-- common/autotest_common.sh@10 -- # set +x
00:15:20.183   23:48:50	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true
00:15:20.183   23:48:50	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:15:20.183   23:48:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:20.183   23:48:50	-- common/autotest_common.sh@10 -- # set +x
00:15:20.183  ************************************
00:15:20.183  START TEST raid_state_function_test_sb
00:15:20.183  ************************************
00:15:20.184   23:48:50	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 true
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:15:20.184    23:48:50	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:15:20.184    23:48:50	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:20.184    23:48:50	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:15:20.184    23:48:50	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:20.184    23:48:50	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:20.184    23:48:50	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:15:20.184    23:48:50	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:20.184    23:48:50	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:20.184    23:48:50	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:15:20.184    23:48:50	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:20.184    23:48:50	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@226 -- # raid_pid=115210
00:15:20.184  Process raid pid: 115210
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115210'
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@228 -- # waitforlisten 115210 /var/tmp/spdk-raid.sock
00:15:20.184   23:48:50	-- common/autotest_common.sh@829 -- # '[' -z 115210 ']'
00:15:20.184   23:48:50	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:15:20.184   23:48:50	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:15:20.184   23:48:50	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:20.184  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:15:20.184   23:48:50	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:15:20.184   23:48:50	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:20.184   23:48:50	-- common/autotest_common.sh@10 -- # set +x
00:15:20.184  [2024-12-13 23:48:50.809189] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:20.184  [2024-12-13 23:48:50.809394] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:20.443  [2024-12-13 23:48:50.978855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:20.443  [2024-12-13 23:48:51.142338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:20.701  [2024-12-13 23:48:51.313282] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:21.268   23:48:51	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:21.268   23:48:51	-- common/autotest_common.sh@862 -- # return 0
00:15:21.268   23:48:51	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:21.268  [2024-12-13 23:48:51.862253] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:21.268  [2024-12-13 23:48:51.862674] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:21.268  [2024-12-13 23:48:51.862694] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:21.268  [2024-12-13 23:48:51.862808] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:21.268  [2024-12-13 23:48:51.862824] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:21.268  [2024-12-13 23:48:51.862959] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:21.268   23:48:51	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:15:21.268   23:48:51	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:21.268   23:48:51	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:21.268   23:48:51	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:21.268   23:48:51	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:21.268   23:48:51	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:21.268   23:48:51	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:21.268   23:48:51	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:21.268   23:48:51	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:21.268   23:48:51	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:21.268    23:48:51	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:21.268    23:48:51	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:21.527   23:48:52	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:21.527    "name": "Existed_Raid",
00:15:21.527    "uuid": "913875ed-2afe-4715-b851-4843934e339b",
00:15:21.527    "strip_size_kb": 64,
00:15:21.527    "state": "configuring",
00:15:21.527    "raid_level": "raid0",
00:15:21.527    "superblock": true,
00:15:21.527    "num_base_bdevs": 3,
00:15:21.527    "num_base_bdevs_discovered": 0,
00:15:21.527    "num_base_bdevs_operational": 3,
00:15:21.527    "base_bdevs_list": [
00:15:21.527      {
00:15:21.527        "name": "BaseBdev1",
00:15:21.527        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:21.527        "is_configured": false,
00:15:21.527        "data_offset": 0,
00:15:21.527        "data_size": 0
00:15:21.527      },
00:15:21.527      {
00:15:21.527        "name": "BaseBdev2",
00:15:21.527        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:21.527        "is_configured": false,
00:15:21.527        "data_offset": 0,
00:15:21.527        "data_size": 0
00:15:21.527      },
00:15:21.527      {
00:15:21.527        "name": "BaseBdev3",
00:15:21.527        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:21.527        "is_configured": false,
00:15:21.527        "data_offset": 0,
00:15:21.527        "data_size": 0
00:15:21.527      }
00:15:21.527    ]
00:15:21.527  }'
00:15:21.527   23:48:52	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:21.527   23:48:52	-- common/autotest_common.sh@10 -- # set +x
00:15:22.094   23:48:52	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:15:22.353  [2024-12-13 23:48:52.926301] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:22.353  [2024-12-13 23:48:52.926339] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:15:22.353   23:48:52	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:22.612  [2024-12-13 23:48:53.114375] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:22.612  [2024-12-13 23:48:53.114434] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:22.612  [2024-12-13 23:48:53.114446] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:22.612  [2024-12-13 23:48:53.114473] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:22.612  [2024-12-13 23:48:53.114481] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:22.612  [2024-12-13 23:48:53.114503] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:22.612   23:48:53	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:15:22.612  [2024-12-13 23:48:53.327834] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:22.612  BaseBdev1
00:15:22.871   23:48:53	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:15:22.871   23:48:53	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:15:22.871   23:48:53	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:22.871   23:48:53	-- common/autotest_common.sh@899 -- # local i
00:15:22.871   23:48:53	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:22.871   23:48:53	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:22.871   23:48:53	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:22.871   23:48:53	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:23.130  [
00:15:23.130    {
00:15:23.130      "name": "BaseBdev1",
00:15:23.130      "aliases": [
00:15:23.130        "d3a26d2e-94d6-4bf4-8c68-986de6aabb20"
00:15:23.130      ],
00:15:23.130      "product_name": "Malloc disk",
00:15:23.130      "block_size": 512,
00:15:23.130      "num_blocks": 65536,
00:15:23.130      "uuid": "d3a26d2e-94d6-4bf4-8c68-986de6aabb20",
00:15:23.130      "assigned_rate_limits": {
00:15:23.130        "rw_ios_per_sec": 0,
00:15:23.130        "rw_mbytes_per_sec": 0,
00:15:23.130        "r_mbytes_per_sec": 0,
00:15:23.130        "w_mbytes_per_sec": 0
00:15:23.130      },
00:15:23.130      "claimed": true,
00:15:23.130      "claim_type": "exclusive_write",
00:15:23.130      "zoned": false,
00:15:23.130      "supported_io_types": {
00:15:23.130        "read": true,
00:15:23.130        "write": true,
00:15:23.130        "unmap": true,
00:15:23.130        "write_zeroes": true,
00:15:23.130        "flush": true,
00:15:23.130        "reset": true,
00:15:23.130        "compare": false,
00:15:23.130        "compare_and_write": false,
00:15:23.130        "abort": true,
00:15:23.130        "nvme_admin": false,
00:15:23.130        "nvme_io": false
00:15:23.130      },
00:15:23.130      "memory_domains": [
00:15:23.130        {
00:15:23.130          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:23.130          "dma_device_type": 2
00:15:23.130        }
00:15:23.130      ],
00:15:23.130      "driver_specific": {}
00:15:23.130    }
00:15:23.130  ]
00:15:23.130   23:48:53	-- common/autotest_common.sh@905 -- # return 0
00:15:23.130   23:48:53	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:15:23.130   23:48:53	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:23.130   23:48:53	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:23.130   23:48:53	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:23.130   23:48:53	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:23.130   23:48:53	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:23.130   23:48:53	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:23.130   23:48:53	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:23.130   23:48:53	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:23.130   23:48:53	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:23.130    23:48:53	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:23.130    23:48:53	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:23.388   23:48:54	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:23.388    "name": "Existed_Raid",
00:15:23.388    "uuid": "cb72f028-2215-4016-a0b2-a8846da45601",
00:15:23.388    "strip_size_kb": 64,
00:15:23.388    "state": "configuring",
00:15:23.388    "raid_level": "raid0",
00:15:23.388    "superblock": true,
00:15:23.388    "num_base_bdevs": 3,
00:15:23.388    "num_base_bdevs_discovered": 1,
00:15:23.388    "num_base_bdevs_operational": 3,
00:15:23.388    "base_bdevs_list": [
00:15:23.388      {
00:15:23.388        "name": "BaseBdev1",
00:15:23.389        "uuid": "d3a26d2e-94d6-4bf4-8c68-986de6aabb20",
00:15:23.389        "is_configured": true,
00:15:23.389        "data_offset": 2048,
00:15:23.389        "data_size": 63488
00:15:23.389      },
00:15:23.389      {
00:15:23.389        "name": "BaseBdev2",
00:15:23.389        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:23.389        "is_configured": false,
00:15:23.389        "data_offset": 0,
00:15:23.389        "data_size": 0
00:15:23.389      },
00:15:23.389      {
00:15:23.389        "name": "BaseBdev3",
00:15:23.389        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:23.389        "is_configured": false,
00:15:23.389        "data_offset": 0,
00:15:23.389        "data_size": 0
00:15:23.389      }
00:15:23.389    ]
00:15:23.389  }'
00:15:23.389   23:48:54	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:23.389   23:48:54	-- common/autotest_common.sh@10 -- # set +x
00:15:23.955   23:48:54	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:15:24.214  [2024-12-13 23:48:54.768093] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:24.214  [2024-12-13 23:48:54.768150] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:15:24.214   23:48:54	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:15:24.214   23:48:54	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:15:24.473   23:48:55	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:15:24.731  BaseBdev1
00:15:24.731   23:48:55	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:15:24.731   23:48:55	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:15:24.731   23:48:55	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:24.731   23:48:55	-- common/autotest_common.sh@899 -- # local i
00:15:24.731   23:48:55	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:24.731   23:48:55	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:24.731   23:48:55	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:24.989   23:48:55	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:25.247  [
00:15:25.247    {
00:15:25.247      "name": "BaseBdev1",
00:15:25.247      "aliases": [
00:15:25.247        "d96a5bbf-4597-49cc-b9f6-1112d23e8ee9"
00:15:25.247      ],
00:15:25.247      "product_name": "Malloc disk",
00:15:25.247      "block_size": 512,
00:15:25.247      "num_blocks": 65536,
00:15:25.247      "uuid": "d96a5bbf-4597-49cc-b9f6-1112d23e8ee9",
00:15:25.247      "assigned_rate_limits": {
00:15:25.247        "rw_ios_per_sec": 0,
00:15:25.247        "rw_mbytes_per_sec": 0,
00:15:25.247        "r_mbytes_per_sec": 0,
00:15:25.247        "w_mbytes_per_sec": 0
00:15:25.247      },
00:15:25.247      "claimed": false,
00:15:25.247      "zoned": false,
00:15:25.247      "supported_io_types": {
00:15:25.247        "read": true,
00:15:25.247        "write": true,
00:15:25.247        "unmap": true,
00:15:25.247        "write_zeroes": true,
00:15:25.247        "flush": true,
00:15:25.247        "reset": true,
00:15:25.247        "compare": false,
00:15:25.247        "compare_and_write": false,
00:15:25.247        "abort": true,
00:15:25.247        "nvme_admin": false,
00:15:25.247        "nvme_io": false
00:15:25.247      },
00:15:25.247      "memory_domains": [
00:15:25.247        {
00:15:25.247          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:25.247          "dma_device_type": 2
00:15:25.247        }
00:15:25.247      ],
00:15:25.247      "driver_specific": {}
00:15:25.247    }
00:15:25.247  ]
00:15:25.247   23:48:55	-- common/autotest_common.sh@905 -- # return 0
00:15:25.247   23:48:55	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:25.247  [2024-12-13 23:48:55.938765] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:25.247  [2024-12-13 23:48:55.940608] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:25.247  [2024-12-13 23:48:55.940667] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:25.247  [2024-12-13 23:48:55.940679] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:25.247  [2024-12-13 23:48:55.940704] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:25.247   23:48:55	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:15:25.247   23:48:55	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:25.247   23:48:55	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:15:25.247   23:48:55	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:25.247   23:48:55	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:25.247   23:48:55	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:25.247   23:48:55	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:25.247   23:48:55	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:25.247   23:48:55	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:25.247   23:48:55	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:25.247   23:48:55	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:25.247   23:48:55	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:25.247    23:48:55	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:25.247    23:48:55	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:25.505   23:48:56	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:25.505    "name": "Existed_Raid",
00:15:25.505    "uuid": "4b3a0298-df8b-4a16-8ad3-312020478b9d",
00:15:25.505    "strip_size_kb": 64,
00:15:25.505    "state": "configuring",
00:15:25.505    "raid_level": "raid0",
00:15:25.505    "superblock": true,
00:15:25.505    "num_base_bdevs": 3,
00:15:25.505    "num_base_bdevs_discovered": 1,
00:15:25.505    "num_base_bdevs_operational": 3,
00:15:25.505    "base_bdevs_list": [
00:15:25.505      {
00:15:25.505        "name": "BaseBdev1",
00:15:25.505        "uuid": "d96a5bbf-4597-49cc-b9f6-1112d23e8ee9",
00:15:25.505        "is_configured": true,
00:15:25.505        "data_offset": 2048,
00:15:25.505        "data_size": 63488
00:15:25.505      },
00:15:25.505      {
00:15:25.505        "name": "BaseBdev2",
00:15:25.505        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:25.505        "is_configured": false,
00:15:25.505        "data_offset": 0,
00:15:25.505        "data_size": 0
00:15:25.505      },
00:15:25.505      {
00:15:25.505        "name": "BaseBdev3",
00:15:25.505        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:25.505        "is_configured": false,
00:15:25.505        "data_offset": 0,
00:15:25.505        "data_size": 0
00:15:25.505      }
00:15:25.505    ]
00:15:25.505  }'
00:15:25.505   23:48:56	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:25.505   23:48:56	-- common/autotest_common.sh@10 -- # set +x
00:15:26.071   23:48:56	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:15:26.638  [2024-12-13 23:48:57.079280] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:26.638  BaseBdev2
00:15:26.638   23:48:57	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:15:26.638   23:48:57	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:15:26.638   23:48:57	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:26.638   23:48:57	-- common/autotest_common.sh@899 -- # local i
00:15:26.638   23:48:57	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:26.638   23:48:57	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:26.638   23:48:57	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:26.638   23:48:57	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:26.897  [
00:15:26.897    {
00:15:26.897      "name": "BaseBdev2",
00:15:26.897      "aliases": [
00:15:26.897        "0e961079-dcac-4c1e-8672-ee89fde70911"
00:15:26.897      ],
00:15:26.897      "product_name": "Malloc disk",
00:15:26.897      "block_size": 512,
00:15:26.897      "num_blocks": 65536,
00:15:26.897      "uuid": "0e961079-dcac-4c1e-8672-ee89fde70911",
00:15:26.897      "assigned_rate_limits": {
00:15:26.897        "rw_ios_per_sec": 0,
00:15:26.897        "rw_mbytes_per_sec": 0,
00:15:26.897        "r_mbytes_per_sec": 0,
00:15:26.897        "w_mbytes_per_sec": 0
00:15:26.897      },
00:15:26.897      "claimed": true,
00:15:26.897      "claim_type": "exclusive_write",
00:15:26.897      "zoned": false,
00:15:26.897      "supported_io_types": {
00:15:26.897        "read": true,
00:15:26.897        "write": true,
00:15:26.897        "unmap": true,
00:15:26.897        "write_zeroes": true,
00:15:26.897        "flush": true,
00:15:26.897        "reset": true,
00:15:26.897        "compare": false,
00:15:26.897        "compare_and_write": false,
00:15:26.897        "abort": true,
00:15:26.897        "nvme_admin": false,
00:15:26.897        "nvme_io": false
00:15:26.897      },
00:15:26.897      "memory_domains": [
00:15:26.897        {
00:15:26.897          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:26.897          "dma_device_type": 2
00:15:26.897        }
00:15:26.897      ],
00:15:26.897      "driver_specific": {}
00:15:26.897    }
00:15:26.897  ]
00:15:26.897   23:48:57	-- common/autotest_common.sh@905 -- # return 0
00:15:26.897   23:48:57	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:15:26.897   23:48:57	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:26.897   23:48:57	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:15:26.897   23:48:57	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:26.897   23:48:57	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:26.897   23:48:57	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:26.897   23:48:57	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:26.897   23:48:57	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:26.897   23:48:57	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:26.897   23:48:57	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:26.897   23:48:57	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:26.897   23:48:57	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:26.897    23:48:57	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:26.897    23:48:57	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:27.155   23:48:57	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:27.155    "name": "Existed_Raid",
00:15:27.155    "uuid": "4b3a0298-df8b-4a16-8ad3-312020478b9d",
00:15:27.155    "strip_size_kb": 64,
00:15:27.155    "state": "configuring",
00:15:27.155    "raid_level": "raid0",
00:15:27.155    "superblock": true,
00:15:27.155    "num_base_bdevs": 3,
00:15:27.155    "num_base_bdevs_discovered": 2,
00:15:27.155    "num_base_bdevs_operational": 3,
00:15:27.155    "base_bdevs_list": [
00:15:27.155      {
00:15:27.155        "name": "BaseBdev1",
00:15:27.155        "uuid": "d96a5bbf-4597-49cc-b9f6-1112d23e8ee9",
00:15:27.155        "is_configured": true,
00:15:27.155        "data_offset": 2048,
00:15:27.155        "data_size": 63488
00:15:27.155      },
00:15:27.155      {
00:15:27.155        "name": "BaseBdev2",
00:15:27.155        "uuid": "0e961079-dcac-4c1e-8672-ee89fde70911",
00:15:27.155        "is_configured": true,
00:15:27.155        "data_offset": 2048,
00:15:27.156        "data_size": 63488
00:15:27.156      },
00:15:27.156      {
00:15:27.156        "name": "BaseBdev3",
00:15:27.156        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:27.156        "is_configured": false,
00:15:27.156        "data_offset": 0,
00:15:27.156        "data_size": 0
00:15:27.156      }
00:15:27.156    ]
00:15:27.156  }'
00:15:27.156   23:48:57	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:27.156   23:48:57	-- common/autotest_common.sh@10 -- # set +x
00:15:27.722   23:48:58	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:15:27.980  [2024-12-13 23:48:58.674967] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:27.980  [2024-12-13 23:48:58.675180] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580
00:15:27.980  [2024-12-13 23:48:58.675194] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:27.980  [2024-12-13 23:48:58.675327] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:15:27.980  [2024-12-13 23:48:58.675680] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580
00:15:27.981  [2024-12-13 23:48:58.675710] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580
00:15:27.981  [2024-12-13 23:48:58.675858] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:27.981  BaseBdev3
00:15:27.981   23:48:58	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:15:27.981   23:48:58	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:15:27.981   23:48:58	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:27.981   23:48:58	-- common/autotest_common.sh@899 -- # local i
00:15:27.981   23:48:58	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:27.981   23:48:58	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:27.981   23:48:58	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:28.239   23:48:58	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:28.498  [
00:15:28.498    {
00:15:28.498      "name": "BaseBdev3",
00:15:28.498      "aliases": [
00:15:28.498        "d79a077e-3811-40c7-90b3-750a47b8ff53"
00:15:28.498      ],
00:15:28.498      "product_name": "Malloc disk",
00:15:28.498      "block_size": 512,
00:15:28.498      "num_blocks": 65536,
00:15:28.498      "uuid": "d79a077e-3811-40c7-90b3-750a47b8ff53",
00:15:28.498      "assigned_rate_limits": {
00:15:28.498        "rw_ios_per_sec": 0,
00:15:28.498        "rw_mbytes_per_sec": 0,
00:15:28.498        "r_mbytes_per_sec": 0,
00:15:28.498        "w_mbytes_per_sec": 0
00:15:28.498      },
00:15:28.498      "claimed": true,
00:15:28.498      "claim_type": "exclusive_write",
00:15:28.498      "zoned": false,
00:15:28.498      "supported_io_types": {
00:15:28.498        "read": true,
00:15:28.498        "write": true,
00:15:28.498        "unmap": true,
00:15:28.498        "write_zeroes": true,
00:15:28.498        "flush": true,
00:15:28.498        "reset": true,
00:15:28.498        "compare": false,
00:15:28.498        "compare_and_write": false,
00:15:28.498        "abort": true,
00:15:28.498        "nvme_admin": false,
00:15:28.498        "nvme_io": false
00:15:28.498      },
00:15:28.498      "memory_domains": [
00:15:28.498        {
00:15:28.498          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:28.498          "dma_device_type": 2
00:15:28.498        }
00:15:28.498      ],
00:15:28.498      "driver_specific": {}
00:15:28.498    }
00:15:28.498  ]
00:15:28.498   23:48:59	-- common/autotest_common.sh@905 -- # return 0
00:15:28.498   23:48:59	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:15:28.498   23:48:59	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:28.498   23:48:59	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:15:28.498   23:48:59	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:28.498   23:48:59	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:28.498   23:48:59	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:28.498   23:48:59	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:28.498   23:48:59	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:28.498   23:48:59	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:28.498   23:48:59	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:28.498   23:48:59	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:28.498   23:48:59	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:28.498    23:48:59	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:28.498    23:48:59	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:28.756   23:48:59	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:28.756    "name": "Existed_Raid",
00:15:28.756    "uuid": "4b3a0298-df8b-4a16-8ad3-312020478b9d",
00:15:28.756    "strip_size_kb": 64,
00:15:28.756    "state": "online",
00:15:28.756    "raid_level": "raid0",
00:15:28.756    "superblock": true,
00:15:28.756    "num_base_bdevs": 3,
00:15:28.756    "num_base_bdevs_discovered": 3,
00:15:28.756    "num_base_bdevs_operational": 3,
00:15:28.756    "base_bdevs_list": [
00:15:28.756      {
00:15:28.756        "name": "BaseBdev1",
00:15:28.756        "uuid": "d96a5bbf-4597-49cc-b9f6-1112d23e8ee9",
00:15:28.756        "is_configured": true,
00:15:28.756        "data_offset": 2048,
00:15:28.756        "data_size": 63488
00:15:28.756      },
00:15:28.756      {
00:15:28.756        "name": "BaseBdev2",
00:15:28.756        "uuid": "0e961079-dcac-4c1e-8672-ee89fde70911",
00:15:28.756        "is_configured": true,
00:15:28.756        "data_offset": 2048,
00:15:28.757        "data_size": 63488
00:15:28.757      },
00:15:28.757      {
00:15:28.757        "name": "BaseBdev3",
00:15:28.757        "uuid": "d79a077e-3811-40c7-90b3-750a47b8ff53",
00:15:28.757        "is_configured": true,
00:15:28.757        "data_offset": 2048,
00:15:28.757        "data_size": 63488
00:15:28.757      }
00:15:28.757    ]
00:15:28.757  }'
00:15:28.757   23:48:59	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:28.757   23:48:59	-- common/autotest_common.sh@10 -- # set +x
00:15:29.324   23:48:59	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:15:29.583  [2024-12-13 23:49:00.162180] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:29.583  [2024-12-13 23:49:00.162209] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:29.583  [2024-12-13 23:49:00.162255] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:29.583   23:49:00	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:15:29.583   23:49:00	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:15:29.583   23:49:00	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:15:29.583   23:49:00	-- bdev/bdev_raid.sh@197 -- # return 1
00:15:29.583   23:49:00	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:15:29.583   23:49:00	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:15:29.583   23:49:00	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:29.583   23:49:00	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:15:29.583   23:49:00	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:29.583   23:49:00	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:29.583   23:49:00	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:15:29.583   23:49:00	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:29.583   23:49:00	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:29.583   23:49:00	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:29.583   23:49:00	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:29.583    23:49:00	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:29.583    23:49:00	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:29.842   23:49:00	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:29.842    "name": "Existed_Raid",
00:15:29.842    "uuid": "4b3a0298-df8b-4a16-8ad3-312020478b9d",
00:15:29.842    "strip_size_kb": 64,
00:15:29.842    "state": "offline",
00:15:29.842    "raid_level": "raid0",
00:15:29.842    "superblock": true,
00:15:29.842    "num_base_bdevs": 3,
00:15:29.842    "num_base_bdevs_discovered": 2,
00:15:29.842    "num_base_bdevs_operational": 2,
00:15:29.842    "base_bdevs_list": [
00:15:29.842      {
00:15:29.842        "name": null,
00:15:29.842        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:29.842        "is_configured": false,
00:15:29.842        "data_offset": 2048,
00:15:29.842        "data_size": 63488
00:15:29.842      },
00:15:29.842      {
00:15:29.842        "name": "BaseBdev2",
00:15:29.842        "uuid": "0e961079-dcac-4c1e-8672-ee89fde70911",
00:15:29.842        "is_configured": true,
00:15:29.842        "data_offset": 2048,
00:15:29.842        "data_size": 63488
00:15:29.842      },
00:15:29.842      {
00:15:29.842        "name": "BaseBdev3",
00:15:29.842        "uuid": "d79a077e-3811-40c7-90b3-750a47b8ff53",
00:15:29.842        "is_configured": true,
00:15:29.842        "data_offset": 2048,
00:15:29.842        "data_size": 63488
00:15:29.842      }
00:15:29.842    ]
00:15:29.842  }'
00:15:29.842   23:49:00	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:29.842   23:49:00	-- common/autotest_common.sh@10 -- # set +x
00:15:30.410   23:49:01	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:15:30.410   23:49:01	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:30.410    23:49:01	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:30.410    23:49:01	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:15:30.668   23:49:01	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:15:30.668   23:49:01	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:30.668   23:49:01	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:15:30.668  [2024-12-13 23:49:01.386534] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:30.927   23:49:01	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:15:30.927   23:49:01	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:30.927    23:49:01	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:30.927    23:49:01	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:15:31.185   23:49:01	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:15:31.185   23:49:01	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:31.185   23:49:01	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:15:31.444  [2024-12-13 23:49:01.967744] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:31.444  [2024-12-13 23:49:01.967809] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline
00:15:31.444   23:49:02	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:15:31.444   23:49:02	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:31.444    23:49:02	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:31.444    23:49:02	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:15:31.702   23:49:02	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:15:31.702   23:49:02	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:15:31.702   23:49:02	-- bdev/bdev_raid.sh@287 -- # killprocess 115210
00:15:31.702   23:49:02	-- common/autotest_common.sh@936 -- # '[' -z 115210 ']'
00:15:31.703   23:49:02	-- common/autotest_common.sh@940 -- # kill -0 115210
00:15:31.703    23:49:02	-- common/autotest_common.sh@941 -- # uname
00:15:31.703   23:49:02	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:31.703    23:49:02	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115210
00:15:31.703   23:49:02	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:31.703   23:49:02	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:31.703   23:49:02	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 115210'
00:15:31.703  killing process with pid 115210
00:15:31.703   23:49:02	-- common/autotest_common.sh@955 -- # kill 115210
00:15:31.703  [2024-12-13 23:49:02.306205] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:31.703  [2024-12-13 23:49:02.306323] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:31.703   23:49:02	-- common/autotest_common.sh@960 -- # wait 115210
00:15:32.667  ************************************
00:15:32.667  END TEST raid_state_function_test_sb
00:15:32.667  ************************************
00:15:32.667   23:49:03	-- bdev/bdev_raid.sh@289 -- # return 0
00:15:32.667  
00:15:32.667  real	0m12.600s
00:15:32.667  user	0m22.171s
00:15:32.667  sys	0m1.494s
00:15:32.667   23:49:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:32.667   23:49:03	-- common/autotest_common.sh@10 -- # set +x
00:15:32.667   23:49:03	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3
00:15:32.667   23:49:03	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:15:32.667   23:49:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:32.667   23:49:03	-- common/autotest_common.sh@10 -- # set +x
00:15:32.667  ************************************
00:15:32.667  START TEST raid_superblock_test
00:15:32.667  ************************************
00:15:32.667   23:49:03	-- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 3
00:15:32.667   23:49:03	-- bdev/bdev_raid.sh@338 -- # local raid_level=raid0
00:15:32.667   23:49:03	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3
00:15:32.667   23:49:03	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:15:32.667   23:49:03	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:15:32.667   23:49:03	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:15:32.667   23:49:03	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:15:32.667   23:49:03	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:15:32.667   23:49:03	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:15:32.667   23:49:03	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:15:32.667   23:49:03	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:15:32.667   23:49:03	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:15:32.667   23:49:03	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:15:32.667   23:49:03	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:15:32.667   23:49:03	-- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']'
00:15:32.926   23:49:03	-- bdev/bdev_raid.sh@350 -- # strip_size=64
00:15:32.926   23:49:03	-- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:15:32.926   23:49:03	-- bdev/bdev_raid.sh@357 -- # raid_pid=115595
00:15:32.926   23:49:03	-- bdev/bdev_raid.sh@358 -- # waitforlisten 115595 /var/tmp/spdk-raid.sock
00:15:32.926   23:49:03	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:15:32.926   23:49:03	-- common/autotest_common.sh@829 -- # '[' -z 115595 ']'
00:15:32.926   23:49:03	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:15:32.926   23:49:03	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:32.926  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:15:32.926   23:49:03	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:15:32.926   23:49:03	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:32.926   23:49:03	-- common/autotest_common.sh@10 -- # set +x
00:15:32.926  [2024-12-13 23:49:03.464504] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:32.926  [2024-12-13 23:49:03.464694] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115595 ]
00:15:32.926  [2024-12-13 23:49:03.633012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:33.185  [2024-12-13 23:49:03.869998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:33.444  [2024-12-13 23:49:04.040589] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:33.702   23:49:04	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:33.702   23:49:04	-- common/autotest_common.sh@862 -- # return 0
00:15:33.702   23:49:04	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:15:33.702   23:49:04	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:15:33.702   23:49:04	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:15:33.702   23:49:04	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:15:33.702   23:49:04	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:15:33.702   23:49:04	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:33.702   23:49:04	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:15:33.702   23:49:04	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:33.702   23:49:04	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:15:33.960  malloc1
00:15:33.960   23:49:04	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:34.219  [2024-12-13 23:49:04.851947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:34.219  [2024-12-13 23:49:04.852441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:34.219  [2024-12-13 23:49:04.852597] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:15:34.219  [2024-12-13 23:49:04.852754] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:34.219  [2024-12-13 23:49:04.855135] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:34.219  [2024-12-13 23:49:04.855312] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:34.219  pt1
00:15:34.219   23:49:04	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:15:34.219   23:49:04	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:15:34.219   23:49:04	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:15:34.219   23:49:04	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:15:34.219   23:49:04	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:15:34.219   23:49:04	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:34.219   23:49:04	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:15:34.219   23:49:04	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:34.219   23:49:04	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:15:34.477  malloc2
00:15:34.477   23:49:05	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:34.735  [2024-12-13 23:49:05.260369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:34.735  [2024-12-13 23:49:05.260577] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:34.735  [2024-12-13 23:49:05.260765] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:15:34.735  [2024-12-13 23:49:05.260940] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:34.735  [2024-12-13 23:49:05.263174] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:34.735  [2024-12-13 23:49:05.263336] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:34.735  pt2
00:15:34.735   23:49:05	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:15:34.735   23:49:05	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:15:34.735   23:49:05	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:15:34.735   23:49:05	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:15:34.735   23:49:05	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:15:34.735   23:49:05	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:34.735   23:49:05	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:15:34.735   23:49:05	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:34.735   23:49:05	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:15:34.993  malloc3
00:15:34.993   23:49:05	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:34.993  [2024-12-13 23:49:05.662665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:34.993  [2024-12-13 23:49:05.662843] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:34.993  [2024-12-13 23:49:05.663017] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:15:34.993  [2024-12-13 23:49:05.663160] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:34.993  [2024-12-13 23:49:05.665384] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:34.993  [2024-12-13 23:49:05.665525] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:34.993  pt3
00:15:34.993   23:49:05	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:15:34.993   23:49:05	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:15:34.993   23:49:05	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
00:15:35.251  [2024-12-13 23:49:05.846734] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:35.251  [2024-12-13 23:49:05.848556] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:35.251  [2024-12-13 23:49:05.848624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:35.251  [2024-12-13 23:49:05.848789] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780
00:15:35.251  [2024-12-13 23:49:05.848818] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:35.251  [2024-12-13 23:49:05.848936] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930
00:15:35.251  [2024-12-13 23:49:05.849264] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780
00:15:35.251  [2024-12-13 23:49:05.849288] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780
00:15:35.251  [2024-12-13 23:49:05.849416] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:35.251   23:49:05	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:15:35.251   23:49:05	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:35.251   23:49:05	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:35.251   23:49:05	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:35.251   23:49:05	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:35.251   23:49:05	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:35.251   23:49:05	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:35.251   23:49:05	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:35.251   23:49:05	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:35.251   23:49:05	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:35.251    23:49:05	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:35.251    23:49:05	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:35.509   23:49:06	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:35.509    "name": "raid_bdev1",
00:15:35.509    "uuid": "c8e6e286-5b8b-44a7-917c-976f5023d20c",
00:15:35.509    "strip_size_kb": 64,
00:15:35.509    "state": "online",
00:15:35.509    "raid_level": "raid0",
00:15:35.509    "superblock": true,
00:15:35.509    "num_base_bdevs": 3,
00:15:35.509    "num_base_bdevs_discovered": 3,
00:15:35.509    "num_base_bdevs_operational": 3,
00:15:35.509    "base_bdevs_list": [
00:15:35.509      {
00:15:35.509        "name": "pt1",
00:15:35.509        "uuid": "ad9f7f1c-6b98-5aad-a5fa-c05ac34f2511",
00:15:35.509        "is_configured": true,
00:15:35.509        "data_offset": 2048,
00:15:35.509        "data_size": 63488
00:15:35.509      },
00:15:35.509      {
00:15:35.509        "name": "pt2",
00:15:35.509        "uuid": "ce447573-6af4-5ba7-983c-5d5a8b46409b",
00:15:35.509        "is_configured": true,
00:15:35.509        "data_offset": 2048,
00:15:35.509        "data_size": 63488
00:15:35.509      },
00:15:35.509      {
00:15:35.509        "name": "pt3",
00:15:35.509        "uuid": "d2279a76-7be7-55cb-9a78-713307ee3046",
00:15:35.509        "is_configured": true,
00:15:35.509        "data_offset": 2048,
00:15:35.509        "data_size": 63488
00:15:35.509      }
00:15:35.509    ]
00:15:35.509  }'
00:15:35.509   23:49:06	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:35.509   23:49:06	-- common/autotest_common.sh@10 -- # set +x
00:15:36.076    23:49:06	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:15:36.076    23:49:06	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:15:36.334  [2024-12-13 23:49:06.855089] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:36.334   23:49:06	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=c8e6e286-5b8b-44a7-917c-976f5023d20c
00:15:36.334   23:49:06	-- bdev/bdev_raid.sh@380 -- # '[' -z c8e6e286-5b8b-44a7-917c-976f5023d20c ']'
00:15:36.334   23:49:06	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:15:36.593  [2024-12-13 23:49:07.102956] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:36.593  [2024-12-13 23:49:07.103105] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:36.593  [2024-12-13 23:49:07.103281] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:36.593  [2024-12-13 23:49:07.103504] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:36.593  [2024-12-13 23:49:07.103619] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline
00:15:36.593    23:49:07	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:36.593    23:49:07	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:15:36.852   23:49:07	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:15:36.852   23:49:07	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:15:36.852   23:49:07	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:15:36.852   23:49:07	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:15:36.852   23:49:07	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:15:36.852   23:49:07	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:15:37.110   23:49:07	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:15:37.111   23:49:07	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:15:37.369    23:49:07	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:15:37.369    23:49:07	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:15:37.628   23:49:08	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:15:37.628   23:49:08	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:15:37.628   23:49:08	-- common/autotest_common.sh@650 -- # local es=0
00:15:37.628   23:49:08	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:15:37.628   23:49:08	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:37.628   23:49:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:37.628    23:49:08	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:37.628   23:49:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:37.628    23:49:08	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:37.628   23:49:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:37.628   23:49:08	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:37.628   23:49:08	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:15:37.628   23:49:08	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:15:37.628  [2024-12-13 23:49:08.343227] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:15:37.628  [2024-12-13 23:49:08.345278] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:15:37.628  [2024-12-13 23:49:08.345454] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:15:37.628  [2024-12-13 23:49:08.345548] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:15:37.628  [2024-12-13 23:49:08.346015] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:15:37.628  [2024-12-13 23:49:08.346192] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:15:37.628  [2024-12-13 23:49:08.346278] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:37.628  [2024-12-13 23:49:08.346338] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring
00:15:37.628  request:
00:15:37.628  {
00:15:37.628    "name": "raid_bdev1",
00:15:37.628    "raid_level": "raid0",
00:15:37.628    "base_bdevs": [
00:15:37.628      "malloc1",
00:15:37.628      "malloc2",
00:15:37.628      "malloc3"
00:15:37.628    ],
00:15:37.628    "superblock": false,
00:15:37.628    "strip_size_kb": 64,
00:15:37.628    "method": "bdev_raid_create",
00:15:37.628    "req_id": 1
00:15:37.628  }
00:15:37.628  Got JSON-RPC error response
00:15:37.628  response:
00:15:37.628  {
00:15:37.628    "code": -17,
00:15:37.628    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:37.628  }
00:15:37.628   23:49:08	-- common/autotest_common.sh@653 -- # es=1
00:15:37.628   23:49:08	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:37.628   23:49:08	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:37.628   23:49:08	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:15:37.628    23:49:08	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:37.628    23:49:08	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:15:38.196   23:49:08	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:15:38.196   23:49:08	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:15:38.196   23:49:08	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:38.196  [2024-12-13 23:49:08.807226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:38.196  [2024-12-13 23:49:08.807430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:38.196  [2024-12-13 23:49:08.807505] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:15:38.196  [2024-12-13 23:49:08.807621] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:38.196  [2024-12-13 23:49:08.809849] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:38.196  [2024-12-13 23:49:08.810057] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:38.196  [2024-12-13 23:49:08.810291] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:15:38.196  [2024-12-13 23:49:08.810449] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:38.196  pt1
00:15:38.196   23:49:08	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:15:38.196   23:49:08	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:38.196   23:49:08	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:38.196   23:49:08	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:38.196   23:49:08	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:38.196   23:49:08	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:38.196   23:49:08	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:38.196   23:49:08	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:38.196   23:49:08	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:38.196   23:49:08	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:38.196    23:49:08	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:38.196    23:49:08	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:38.455   23:49:09	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:38.455    "name": "raid_bdev1",
00:15:38.455    "uuid": "c8e6e286-5b8b-44a7-917c-976f5023d20c",
00:15:38.455    "strip_size_kb": 64,
00:15:38.455    "state": "configuring",
00:15:38.455    "raid_level": "raid0",
00:15:38.455    "superblock": true,
00:15:38.455    "num_base_bdevs": 3,
00:15:38.455    "num_base_bdevs_discovered": 1,
00:15:38.455    "num_base_bdevs_operational": 3,
00:15:38.455    "base_bdevs_list": [
00:15:38.455      {
00:15:38.455        "name": "pt1",
00:15:38.455        "uuid": "ad9f7f1c-6b98-5aad-a5fa-c05ac34f2511",
00:15:38.455        "is_configured": true,
00:15:38.455        "data_offset": 2048,
00:15:38.455        "data_size": 63488
00:15:38.455      },
00:15:38.455      {
00:15:38.455        "name": null,
00:15:38.455        "uuid": "ce447573-6af4-5ba7-983c-5d5a8b46409b",
00:15:38.455        "is_configured": false,
00:15:38.455        "data_offset": 2048,
00:15:38.455        "data_size": 63488
00:15:38.455      },
00:15:38.455      {
00:15:38.455        "name": null,
00:15:38.455        "uuid": "d2279a76-7be7-55cb-9a78-713307ee3046",
00:15:38.455        "is_configured": false,
00:15:38.455        "data_offset": 2048,
00:15:38.455        "data_size": 63488
00:15:38.455      }
00:15:38.455    ]
00:15:38.455  }'
00:15:38.455   23:49:09	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:38.455   23:49:09	-- common/autotest_common.sh@10 -- # set +x
00:15:39.022   23:49:09	-- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']'
00:15:39.022   23:49:09	-- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:39.281  [2024-12-13 23:49:09.883627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:39.281  [2024-12-13 23:49:09.883834] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:39.281  [2024-12-13 23:49:09.883915] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:15:39.281  [2024-12-13 23:49:09.884037] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:39.281  [2024-12-13 23:49:09.884497] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:39.281  [2024-12-13 23:49:09.884658] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:39.281  [2024-12-13 23:49:09.884870] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:15:39.281  [2024-12-13 23:49:09.884999] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:39.281  pt2
00:15:39.281   23:49:09	-- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:15:39.540  [2024-12-13 23:49:10.123717] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:15:39.540   23:49:10	-- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:15:39.540   23:49:10	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:39.540   23:49:10	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:39.540   23:49:10	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:39.540   23:49:10	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:39.540   23:49:10	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:39.540   23:49:10	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:39.540   23:49:10	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:39.540   23:49:10	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:39.540   23:49:10	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:39.540    23:49:10	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:39.540    23:49:10	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:39.799   23:49:10	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:39.799    "name": "raid_bdev1",
00:15:39.799    "uuid": "c8e6e286-5b8b-44a7-917c-976f5023d20c",
00:15:39.799    "strip_size_kb": 64,
00:15:39.799    "state": "configuring",
00:15:39.799    "raid_level": "raid0",
00:15:39.799    "superblock": true,
00:15:39.799    "num_base_bdevs": 3,
00:15:39.799    "num_base_bdevs_discovered": 1,
00:15:39.799    "num_base_bdevs_operational": 3,
00:15:39.799    "base_bdevs_list": [
00:15:39.799      {
00:15:39.799        "name": "pt1",
00:15:39.799        "uuid": "ad9f7f1c-6b98-5aad-a5fa-c05ac34f2511",
00:15:39.799        "is_configured": true,
00:15:39.799        "data_offset": 2048,
00:15:39.799        "data_size": 63488
00:15:39.799      },
00:15:39.799      {
00:15:39.799        "name": null,
00:15:39.799        "uuid": "ce447573-6af4-5ba7-983c-5d5a8b46409b",
00:15:39.799        "is_configured": false,
00:15:39.799        "data_offset": 2048,
00:15:39.799        "data_size": 63488
00:15:39.799      },
00:15:39.799      {
00:15:39.799        "name": null,
00:15:39.799        "uuid": "d2279a76-7be7-55cb-9a78-713307ee3046",
00:15:39.799        "is_configured": false,
00:15:39.799        "data_offset": 2048,
00:15:39.799        "data_size": 63488
00:15:39.799      }
00:15:39.799    ]
00:15:39.799  }'
00:15:39.799   23:49:10	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:39.799   23:49:10	-- common/autotest_common.sh@10 -- # set +x
00:15:40.367   23:49:10	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:15:40.367   23:49:10	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:15:40.367   23:49:10	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:40.625  [2024-12-13 23:49:11.152027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:40.625  [2024-12-13 23:49:11.152231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:40.625  [2024-12-13 23:49:11.152305] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:15:40.625  [2024-12-13 23:49:11.152454] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:40.625  [2024-12-13 23:49:11.152923] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:40.625  [2024-12-13 23:49:11.153088] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:40.625  [2024-12-13 23:49:11.153303] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:15:40.625  [2024-12-13 23:49:11.153430] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:40.625  pt2
00:15:40.625   23:49:11	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:15:40.625   23:49:11	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:15:40.625   23:49:11	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:40.884  [2024-12-13 23:49:11.408074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:40.884  [2024-12-13 23:49:11.408266] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:40.884  [2024-12-13 23:49:11.408340] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:15:40.884  [2024-12-13 23:49:11.408462] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:40.884  [2024-12-13 23:49:11.408887] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:40.884  [2024-12-13 23:49:11.409062] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:40.884  [2024-12-13 23:49:11.409270] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:15:40.884  [2024-12-13 23:49:11.409395] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:40.884  [2024-12-13 23:49:11.409550] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980
00:15:40.884  [2024-12-13 23:49:11.409684] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:40.884  [2024-12-13 23:49:11.409823] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:15:40.884  [2024-12-13 23:49:11.410305] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980
00:15:40.884  [2024-12-13 23:49:11.410448] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980
00:15:40.884  [2024-12-13 23:49:11.410665] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:40.884  pt3
00:15:40.884   23:49:11	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:15:40.884   23:49:11	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:15:40.884   23:49:11	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:15:40.884   23:49:11	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:40.884   23:49:11	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:40.884   23:49:11	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:40.884   23:49:11	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:40.884   23:49:11	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:40.884   23:49:11	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:40.884   23:49:11	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:40.884   23:49:11	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:40.884   23:49:11	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:40.884    23:49:11	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:40.884    23:49:11	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:40.884   23:49:11	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:40.884    "name": "raid_bdev1",
00:15:40.884    "uuid": "c8e6e286-5b8b-44a7-917c-976f5023d20c",
00:15:40.884    "strip_size_kb": 64,
00:15:40.884    "state": "online",
00:15:40.884    "raid_level": "raid0",
00:15:40.884    "superblock": true,
00:15:40.884    "num_base_bdevs": 3,
00:15:40.884    "num_base_bdevs_discovered": 3,
00:15:40.884    "num_base_bdevs_operational": 3,
00:15:40.884    "base_bdevs_list": [
00:15:40.884      {
00:15:40.884        "name": "pt1",
00:15:40.884        "uuid": "ad9f7f1c-6b98-5aad-a5fa-c05ac34f2511",
00:15:40.884        "is_configured": true,
00:15:40.884        "data_offset": 2048,
00:15:40.884        "data_size": 63488
00:15:40.884      },
00:15:40.884      {
00:15:40.884        "name": "pt2",
00:15:40.884        "uuid": "ce447573-6af4-5ba7-983c-5d5a8b46409b",
00:15:40.884        "is_configured": true,
00:15:40.884        "data_offset": 2048,
00:15:40.884        "data_size": 63488
00:15:40.884      },
00:15:40.884      {
00:15:40.884        "name": "pt3",
00:15:40.884        "uuid": "d2279a76-7be7-55cb-9a78-713307ee3046",
00:15:40.884        "is_configured": true,
00:15:40.884        "data_offset": 2048,
00:15:40.884        "data_size": 63488
00:15:40.884      }
00:15:40.884    ]
00:15:40.884  }'
00:15:40.885   23:49:11	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:40.885   23:49:11	-- common/autotest_common.sh@10 -- # set +x
00:15:41.820    23:49:12	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:15:41.820    23:49:12	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:15:41.820  [2024-12-13 23:49:12.392428] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:41.820   23:49:12	-- bdev/bdev_raid.sh@430 -- # '[' c8e6e286-5b8b-44a7-917c-976f5023d20c '!=' c8e6e286-5b8b-44a7-917c-976f5023d20c ']'
00:15:41.820   23:49:12	-- bdev/bdev_raid.sh@434 -- # has_redundancy raid0
00:15:41.820   23:49:12	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:15:41.820   23:49:12	-- bdev/bdev_raid.sh@197 -- # return 1
00:15:41.820   23:49:12	-- bdev/bdev_raid.sh@511 -- # killprocess 115595
00:15:41.820   23:49:12	-- common/autotest_common.sh@936 -- # '[' -z 115595 ']'
00:15:41.820   23:49:12	-- common/autotest_common.sh@940 -- # kill -0 115595
00:15:41.820    23:49:12	-- common/autotest_common.sh@941 -- # uname
00:15:41.820   23:49:12	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:41.820    23:49:12	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115595
00:15:41.820   23:49:12	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:41.820   23:49:12	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:41.820   23:49:12	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 115595'
00:15:41.820  killing process with pid 115595
00:15:41.820   23:49:12	-- common/autotest_common.sh@955 -- # kill 115595
00:15:41.820  [2024-12-13 23:49:12.435250] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:41.820  [2024-12-13 23:49:12.435315] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:41.820  [2024-12-13 23:49:12.435365] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:41.820  [2024-12-13 23:49:12.435417] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline
00:15:41.820   23:49:12	-- common/autotest_common.sh@960 -- # wait 115595
00:15:42.079  [2024-12-13 23:49:12.627555] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:43.017  ************************************
00:15:43.017  END TEST raid_superblock_test
00:15:43.017  ************************************
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@513 -- # return 0
00:15:43.017  
00:15:43.017  real	0m10.157s
00:15:43.017  user	0m17.584s
00:15:43.017  sys	0m1.292s
00:15:43.017   23:49:13	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:43.017   23:49:13	-- common/autotest_common.sh@10 -- # set +x
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false
00:15:43.017   23:49:13	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:15:43.017   23:49:13	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:43.017   23:49:13	-- common/autotest_common.sh@10 -- # set +x
00:15:43.017  ************************************
00:15:43.017  START TEST raid_state_function_test
00:15:43.017  ************************************
00:15:43.017   23:49:13	-- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 false
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@202 -- # local raid_level=concat
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:15:43.017    23:49:13	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:15:43.017    23:49:13	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:43.017    23:49:13	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:15:43.017    23:49:13	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:43.017    23:49:13	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:43.017    23:49:13	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:15:43.017    23:49:13	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:43.017    23:49:13	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:43.017    23:49:13	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:15:43.017    23:49:13	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:43.017    23:49:13	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']'
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@226 -- # raid_pid=115900
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115900'
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:15:43.017  Process raid pid: 115900
00:15:43.017   23:49:13	-- bdev/bdev_raid.sh@228 -- # waitforlisten 115900 /var/tmp/spdk-raid.sock
00:15:43.017   23:49:13	-- common/autotest_common.sh@829 -- # '[' -z 115900 ']'
00:15:43.017   23:49:13	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:15:43.017   23:49:13	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:43.017   23:49:13	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:15:43.017  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:15:43.017   23:49:13	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:43.017   23:49:13	-- common/autotest_common.sh@10 -- # set +x
00:15:43.017  [2024-12-13 23:49:13.684791] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:43.017  [2024-12-13 23:49:13.685000] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:43.276  [2024-12-13 23:49:13.858047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:43.534  [2024-12-13 23:49:14.087299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:43.534  [2024-12-13 23:49:14.255132] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:44.101   23:49:14	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:44.101   23:49:14	-- common/autotest_common.sh@862 -- # return 0
00:15:44.101   23:49:14	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:44.359  [2024-12-13 23:49:14.861233] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:44.359  [2024-12-13 23:49:14.861644] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:44.359  [2024-12-13 23:49:14.861690] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:44.359  [2024-12-13 23:49:14.861718] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:44.359  [2024-12-13 23:49:14.861727] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:44.359  [2024-12-13 23:49:14.861813] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:44.359   23:49:14	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:44.359   23:49:14	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:44.359   23:49:14	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:44.359   23:49:14	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:44.359   23:49:14	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:44.359   23:49:14	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:44.359   23:49:14	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:44.359   23:49:14	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:44.359   23:49:14	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:44.359   23:49:14	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:44.359    23:49:14	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:44.359    23:49:14	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:44.616   23:49:15	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:44.616    "name": "Existed_Raid",
00:15:44.616    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:44.616    "strip_size_kb": 64,
00:15:44.616    "state": "configuring",
00:15:44.616    "raid_level": "concat",
00:15:44.616    "superblock": false,
00:15:44.616    "num_base_bdevs": 3,
00:15:44.616    "num_base_bdevs_discovered": 0,
00:15:44.616    "num_base_bdevs_operational": 3,
00:15:44.616    "base_bdevs_list": [
00:15:44.616      {
00:15:44.616        "name": "BaseBdev1",
00:15:44.616        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:44.616        "is_configured": false,
00:15:44.616        "data_offset": 0,
00:15:44.616        "data_size": 0
00:15:44.616      },
00:15:44.616      {
00:15:44.616        "name": "BaseBdev2",
00:15:44.616        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:44.616        "is_configured": false,
00:15:44.616        "data_offset": 0,
00:15:44.616        "data_size": 0
00:15:44.616      },
00:15:44.616      {
00:15:44.616        "name": "BaseBdev3",
00:15:44.616        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:44.616        "is_configured": false,
00:15:44.616        "data_offset": 0,
00:15:44.616        "data_size": 0
00:15:44.616      }
00:15:44.616    ]
00:15:44.616  }'
00:15:44.616   23:49:15	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:44.616   23:49:15	-- common/autotest_common.sh@10 -- # set +x
00:15:45.182   23:49:15	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:15:45.182  [2024-12-13 23:49:15.857316] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:45.182  [2024-12-13 23:49:15.857363] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:15:45.182   23:49:15	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:45.441  [2024-12-13 23:49:16.105394] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:45.441  [2024-12-13 23:49:16.105856] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:45.441  [2024-12-13 23:49:16.105888] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:45.441  [2024-12-13 23:49:16.106030] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:45.441  [2024-12-13 23:49:16.106048] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:45.441  [2024-12-13 23:49:16.106182] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:45.441   23:49:16	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:15:45.699  [2024-12-13 23:49:16.330902] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:45.699  BaseBdev1
00:15:45.699   23:49:16	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:15:45.699   23:49:16	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:15:45.699   23:49:16	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:45.699   23:49:16	-- common/autotest_common.sh@899 -- # local i
00:15:45.699   23:49:16	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:45.699   23:49:16	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:45.699   23:49:16	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:45.958   23:49:16	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:46.217  [
00:15:46.217    {
00:15:46.217      "name": "BaseBdev1",
00:15:46.217      "aliases": [
00:15:46.217        "3d6317e7-7134-4469-bbfe-34371a2ada04"
00:15:46.217      ],
00:15:46.217      "product_name": "Malloc disk",
00:15:46.217      "block_size": 512,
00:15:46.217      "num_blocks": 65536,
00:15:46.217      "uuid": "3d6317e7-7134-4469-bbfe-34371a2ada04",
00:15:46.217      "assigned_rate_limits": {
00:15:46.217        "rw_ios_per_sec": 0,
00:15:46.217        "rw_mbytes_per_sec": 0,
00:15:46.217        "r_mbytes_per_sec": 0,
00:15:46.217        "w_mbytes_per_sec": 0
00:15:46.217      },
00:15:46.217      "claimed": true,
00:15:46.217      "claim_type": "exclusive_write",
00:15:46.217      "zoned": false,
00:15:46.217      "supported_io_types": {
00:15:46.217        "read": true,
00:15:46.217        "write": true,
00:15:46.217        "unmap": true,
00:15:46.217        "write_zeroes": true,
00:15:46.217        "flush": true,
00:15:46.217        "reset": true,
00:15:46.217        "compare": false,
00:15:46.217        "compare_and_write": false,
00:15:46.217        "abort": true,
00:15:46.217        "nvme_admin": false,
00:15:46.217        "nvme_io": false
00:15:46.217      },
00:15:46.217      "memory_domains": [
00:15:46.217        {
00:15:46.217          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:46.217          "dma_device_type": 2
00:15:46.217        }
00:15:46.217      ],
00:15:46.217      "driver_specific": {}
00:15:46.217    }
00:15:46.217  ]
00:15:46.217   23:49:16	-- common/autotest_common.sh@905 -- # return 0
00:15:46.217   23:49:16	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:46.217   23:49:16	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:46.217   23:49:16	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:46.217   23:49:16	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:46.217   23:49:16	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:46.217   23:49:16	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:46.217   23:49:16	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:46.217   23:49:16	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:46.217   23:49:16	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:46.217   23:49:16	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:46.217    23:49:16	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:46.217    23:49:16	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:46.476   23:49:17	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:46.476    "name": "Existed_Raid",
00:15:46.476    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:46.476    "strip_size_kb": 64,
00:15:46.476    "state": "configuring",
00:15:46.476    "raid_level": "concat",
00:15:46.476    "superblock": false,
00:15:46.476    "num_base_bdevs": 3,
00:15:46.476    "num_base_bdevs_discovered": 1,
00:15:46.476    "num_base_bdevs_operational": 3,
00:15:46.476    "base_bdevs_list": [
00:15:46.476      {
00:15:46.476        "name": "BaseBdev1",
00:15:46.476        "uuid": "3d6317e7-7134-4469-bbfe-34371a2ada04",
00:15:46.476        "is_configured": true,
00:15:46.476        "data_offset": 0,
00:15:46.476        "data_size": 65536
00:15:46.476      },
00:15:46.476      {
00:15:46.476        "name": "BaseBdev2",
00:15:46.476        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:46.476        "is_configured": false,
00:15:46.476        "data_offset": 0,
00:15:46.476        "data_size": 0
00:15:46.476      },
00:15:46.476      {
00:15:46.476        "name": "BaseBdev3",
00:15:46.476        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:46.476        "is_configured": false,
00:15:46.476        "data_offset": 0,
00:15:46.476        "data_size": 0
00:15:46.476      }
00:15:46.476    ]
00:15:46.476  }'
00:15:46.476   23:49:17	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:46.476   23:49:17	-- common/autotest_common.sh@10 -- # set +x
00:15:47.043   23:49:17	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:15:47.301  [2024-12-13 23:49:17.835181] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:47.301  [2024-12-13 23:49:17.835228] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:15:47.301   23:49:17	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:15:47.301   23:49:17	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:47.301  [2024-12-13 23:49:18.027265] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:47.301  [2024-12-13 23:49:18.029024] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:47.301  [2024-12-13 23:49:18.029401] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:47.301  [2024-12-13 23:49:18.029428] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:47.301  [2024-12-13 23:49:18.029553] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:47.560   23:49:18	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:15:47.560   23:49:18	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:47.560   23:49:18	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:47.560   23:49:18	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:47.560   23:49:18	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:47.560   23:49:18	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:47.560   23:49:18	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:47.560   23:49:18	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:47.560   23:49:18	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:47.560   23:49:18	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:47.560   23:49:18	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:47.560   23:49:18	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:47.560    23:49:18	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:47.560    23:49:18	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:47.560   23:49:18	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:47.560    "name": "Existed_Raid",
00:15:47.560    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:47.560    "strip_size_kb": 64,
00:15:47.560    "state": "configuring",
00:15:47.560    "raid_level": "concat",
00:15:47.560    "superblock": false,
00:15:47.560    "num_base_bdevs": 3,
00:15:47.560    "num_base_bdevs_discovered": 1,
00:15:47.560    "num_base_bdevs_operational": 3,
00:15:47.560    "base_bdevs_list": [
00:15:47.560      {
00:15:47.560        "name": "BaseBdev1",
00:15:47.560        "uuid": "3d6317e7-7134-4469-bbfe-34371a2ada04",
00:15:47.560        "is_configured": true,
00:15:47.560        "data_offset": 0,
00:15:47.560        "data_size": 65536
00:15:47.560      },
00:15:47.560      {
00:15:47.560        "name": "BaseBdev2",
00:15:47.560        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:47.560        "is_configured": false,
00:15:47.560        "data_offset": 0,
00:15:47.560        "data_size": 0
00:15:47.560      },
00:15:47.560      {
00:15:47.560        "name": "BaseBdev3",
00:15:47.560        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:47.560        "is_configured": false,
00:15:47.560        "data_offset": 0,
00:15:47.560        "data_size": 0
00:15:47.560      }
00:15:47.560    ]
00:15:47.560  }'
00:15:47.560   23:49:18	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:47.560   23:49:18	-- common/autotest_common.sh@10 -- # set +x
00:15:48.128   23:49:18	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:15:48.387  [2024-12-13 23:49:19.110148] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:48.387  BaseBdev2
00:15:48.646   23:49:19	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:15:48.646   23:49:19	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:15:48.646   23:49:19	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:48.646   23:49:19	-- common/autotest_common.sh@899 -- # local i
00:15:48.646   23:49:19	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:48.646   23:49:19	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:48.646   23:49:19	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:48.646   23:49:19	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:48.905  [
00:15:48.905    {
00:15:48.905      "name": "BaseBdev2",
00:15:48.905      "aliases": [
00:15:48.905        "18921dc5-54c1-4f0a-ba5e-2e8b05a338bc"
00:15:48.905      ],
00:15:48.905      "product_name": "Malloc disk",
00:15:48.905      "block_size": 512,
00:15:48.905      "num_blocks": 65536,
00:15:48.905      "uuid": "18921dc5-54c1-4f0a-ba5e-2e8b05a338bc",
00:15:48.905      "assigned_rate_limits": {
00:15:48.905        "rw_ios_per_sec": 0,
00:15:48.905        "rw_mbytes_per_sec": 0,
00:15:48.905        "r_mbytes_per_sec": 0,
00:15:48.905        "w_mbytes_per_sec": 0
00:15:48.905      },
00:15:48.905      "claimed": true,
00:15:48.905      "claim_type": "exclusive_write",
00:15:48.905      "zoned": false,
00:15:48.905      "supported_io_types": {
00:15:48.905        "read": true,
00:15:48.905        "write": true,
00:15:48.905        "unmap": true,
00:15:48.905        "write_zeroes": true,
00:15:48.905        "flush": true,
00:15:48.905        "reset": true,
00:15:48.905        "compare": false,
00:15:48.905        "compare_and_write": false,
00:15:48.905        "abort": true,
00:15:48.905        "nvme_admin": false,
00:15:48.905        "nvme_io": false
00:15:48.905      },
00:15:48.905      "memory_domains": [
00:15:48.905        {
00:15:48.905          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:48.905          "dma_device_type": 2
00:15:48.905        }
00:15:48.905      ],
00:15:48.905      "driver_specific": {}
00:15:48.905    }
00:15:48.905  ]
00:15:48.905   23:49:19	-- common/autotest_common.sh@905 -- # return 0
00:15:48.905   23:49:19	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:15:48.905   23:49:19	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:48.905   23:49:19	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:48.905   23:49:19	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:48.905   23:49:19	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:48.905   23:49:19	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:48.905   23:49:19	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:48.905   23:49:19	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:48.905   23:49:19	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:48.905   23:49:19	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:48.905   23:49:19	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:48.905   23:49:19	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:48.905    23:49:19	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:48.905    23:49:19	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:49.164   23:49:19	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:49.164    "name": "Existed_Raid",
00:15:49.164    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:49.164    "strip_size_kb": 64,
00:15:49.164    "state": "configuring",
00:15:49.164    "raid_level": "concat",
00:15:49.164    "superblock": false,
00:15:49.164    "num_base_bdevs": 3,
00:15:49.164    "num_base_bdevs_discovered": 2,
00:15:49.164    "num_base_bdevs_operational": 3,
00:15:49.164    "base_bdevs_list": [
00:15:49.164      {
00:15:49.164        "name": "BaseBdev1",
00:15:49.164        "uuid": "3d6317e7-7134-4469-bbfe-34371a2ada04",
00:15:49.164        "is_configured": true,
00:15:49.164        "data_offset": 0,
00:15:49.164        "data_size": 65536
00:15:49.164      },
00:15:49.164      {
00:15:49.164        "name": "BaseBdev2",
00:15:49.164        "uuid": "18921dc5-54c1-4f0a-ba5e-2e8b05a338bc",
00:15:49.164        "is_configured": true,
00:15:49.164        "data_offset": 0,
00:15:49.164        "data_size": 65536
00:15:49.164      },
00:15:49.164      {
00:15:49.164        "name": "BaseBdev3",
00:15:49.164        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:49.164        "is_configured": false,
00:15:49.164        "data_offset": 0,
00:15:49.164        "data_size": 0
00:15:49.164      }
00:15:49.164    ]
00:15:49.164  }'
00:15:49.164   23:49:19	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:49.164   23:49:19	-- common/autotest_common.sh@10 -- # set +x
00:15:49.732   23:49:20	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:15:49.991  [2024-12-13 23:49:20.554677] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:49.991  [2024-12-13 23:49:20.554721] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:15:49.991  [2024-12-13 23:49:20.554731] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:15:49.991  [2024-12-13 23:49:20.554826] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0
00:15:49.991  [2024-12-13 23:49:20.555187] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:15:49.991  [2024-12-13 23:49:20.555211] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80
00:15:49.991  [2024-12-13 23:49:20.555490] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:49.991  BaseBdev3
00:15:49.991   23:49:20	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:15:49.991   23:49:20	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:15:49.991   23:49:20	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:49.991   23:49:20	-- common/autotest_common.sh@899 -- # local i
00:15:49.991   23:49:20	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:49.991   23:49:20	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:49.991   23:49:20	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:50.250   23:49:20	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:50.250  [
00:15:50.250    {
00:15:50.250      "name": "BaseBdev3",
00:15:50.250      "aliases": [
00:15:50.250        "908cd574-f86c-41ef-831b-ae81f29fbac6"
00:15:50.250      ],
00:15:50.250      "product_name": "Malloc disk",
00:15:50.250      "block_size": 512,
00:15:50.250      "num_blocks": 65536,
00:15:50.250      "uuid": "908cd574-f86c-41ef-831b-ae81f29fbac6",
00:15:50.250      "assigned_rate_limits": {
00:15:50.250        "rw_ios_per_sec": 0,
00:15:50.250        "rw_mbytes_per_sec": 0,
00:15:50.250        "r_mbytes_per_sec": 0,
00:15:50.250        "w_mbytes_per_sec": 0
00:15:50.250      },
00:15:50.250      "claimed": true,
00:15:50.250      "claim_type": "exclusive_write",
00:15:50.250      "zoned": false,
00:15:50.250      "supported_io_types": {
00:15:50.250        "read": true,
00:15:50.250        "write": true,
00:15:50.250        "unmap": true,
00:15:50.250        "write_zeroes": true,
00:15:50.250        "flush": true,
00:15:50.250        "reset": true,
00:15:50.250        "compare": false,
00:15:50.250        "compare_and_write": false,
00:15:50.250        "abort": true,
00:15:50.250        "nvme_admin": false,
00:15:50.250        "nvme_io": false
00:15:50.250      },
00:15:50.250      "memory_domains": [
00:15:50.250        {
00:15:50.250          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:50.250          "dma_device_type": 2
00:15:50.250        }
00:15:50.250      ],
00:15:50.250      "driver_specific": {}
00:15:50.250    }
00:15:50.250  ]
00:15:50.508   23:49:20	-- common/autotest_common.sh@905 -- # return 0
00:15:50.508   23:49:20	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:15:50.508   23:49:20	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:50.508   23:49:20	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:15:50.508   23:49:20	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:50.508   23:49:20	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:50.508   23:49:20	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:50.508   23:49:20	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:50.508   23:49:20	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:50.509   23:49:20	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:50.509   23:49:20	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:50.509   23:49:20	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:50.509   23:49:20	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:50.509    23:49:20	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:50.509    23:49:20	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:50.509   23:49:21	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:50.509    "name": "Existed_Raid",
00:15:50.509    "uuid": "d1578719-5101-4d62-9be1-f3e3862e9e7d",
00:15:50.509    "strip_size_kb": 64,
00:15:50.509    "state": "online",
00:15:50.509    "raid_level": "concat",
00:15:50.509    "superblock": false,
00:15:50.509    "num_base_bdevs": 3,
00:15:50.509    "num_base_bdevs_discovered": 3,
00:15:50.509    "num_base_bdevs_operational": 3,
00:15:50.509    "base_bdevs_list": [
00:15:50.509      {
00:15:50.509        "name": "BaseBdev1",
00:15:50.509        "uuid": "3d6317e7-7134-4469-bbfe-34371a2ada04",
00:15:50.509        "is_configured": true,
00:15:50.509        "data_offset": 0,
00:15:50.509        "data_size": 65536
00:15:50.509      },
00:15:50.509      {
00:15:50.509        "name": "BaseBdev2",
00:15:50.509        "uuid": "18921dc5-54c1-4f0a-ba5e-2e8b05a338bc",
00:15:50.509        "is_configured": true,
00:15:50.509        "data_offset": 0,
00:15:50.509        "data_size": 65536
00:15:50.509      },
00:15:50.509      {
00:15:50.509        "name": "BaseBdev3",
00:15:50.509        "uuid": "908cd574-f86c-41ef-831b-ae81f29fbac6",
00:15:50.509        "is_configured": true,
00:15:50.509        "data_offset": 0,
00:15:50.509        "data_size": 65536
00:15:50.509      }
00:15:50.509    ]
00:15:50.509  }'
00:15:50.509   23:49:21	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:50.509   23:49:21	-- common/autotest_common.sh@10 -- # set +x
00:15:51.460   23:49:21	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:15:51.460  [2024-12-13 23:49:22.077940] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:51.460  [2024-12-13 23:49:22.077979] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:51.460  [2024-12-13 23:49:22.078036] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:51.460   23:49:22	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:15:51.460   23:49:22	-- bdev/bdev_raid.sh@264 -- # has_redundancy concat
00:15:51.460   23:49:22	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:15:51.460   23:49:22	-- bdev/bdev_raid.sh@197 -- # return 1
00:15:51.461   23:49:22	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:15:51.461   23:49:22	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:15:51.461   23:49:22	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:51.461   23:49:22	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:15:51.461   23:49:22	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:51.461   23:49:22	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:51.461   23:49:22	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:15:51.461   23:49:22	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:51.461   23:49:22	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:51.461   23:49:22	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:51.461   23:49:22	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:51.461    23:49:22	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:51.461    23:49:22	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:51.732   23:49:22	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:51.732    "name": "Existed_Raid",
00:15:51.732    "uuid": "d1578719-5101-4d62-9be1-f3e3862e9e7d",
00:15:51.732    "strip_size_kb": 64,
00:15:51.732    "state": "offline",
00:15:51.732    "raid_level": "concat",
00:15:51.732    "superblock": false,
00:15:51.732    "num_base_bdevs": 3,
00:15:51.732    "num_base_bdevs_discovered": 2,
00:15:51.732    "num_base_bdevs_operational": 2,
00:15:51.732    "base_bdevs_list": [
00:15:51.732      {
00:15:51.732        "name": null,
00:15:51.732        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:51.732        "is_configured": false,
00:15:51.732        "data_offset": 0,
00:15:51.732        "data_size": 65536
00:15:51.732      },
00:15:51.732      {
00:15:51.732        "name": "BaseBdev2",
00:15:51.732        "uuid": "18921dc5-54c1-4f0a-ba5e-2e8b05a338bc",
00:15:51.732        "is_configured": true,
00:15:51.732        "data_offset": 0,
00:15:51.732        "data_size": 65536
00:15:51.732      },
00:15:51.732      {
00:15:51.732        "name": "BaseBdev3",
00:15:51.732        "uuid": "908cd574-f86c-41ef-831b-ae81f29fbac6",
00:15:51.732        "is_configured": true,
00:15:51.732        "data_offset": 0,
00:15:51.732        "data_size": 65536
00:15:51.732      }
00:15:51.732    ]
00:15:51.732  }'
00:15:51.732   23:49:22	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:51.732   23:49:22	-- common/autotest_common.sh@10 -- # set +x
00:15:52.318   23:49:23	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:15:52.318   23:49:23	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:52.318    23:49:23	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:52.318    23:49:23	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:15:52.886   23:49:23	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:15:52.886   23:49:23	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:52.886   23:49:23	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:15:52.886  [2024-12-13 23:49:23.571000] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:53.145   23:49:23	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:15:53.145   23:49:23	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:53.145    23:49:23	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:53.145    23:49:23	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:15:53.145   23:49:23	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:15:53.145   23:49:23	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:53.145   23:49:23	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:15:53.404  [2024-12-13 23:49:24.064172] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:53.404  [2024-12-13 23:49:24.064226] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline
00:15:53.663   23:49:24	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:15:53.663   23:49:24	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:53.663    23:49:24	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:53.663    23:49:24	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:15:53.663   23:49:24	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:15:53.663   23:49:24	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:15:53.663   23:49:24	-- bdev/bdev_raid.sh@287 -- # killprocess 115900
00:15:53.663   23:49:24	-- common/autotest_common.sh@936 -- # '[' -z 115900 ']'
00:15:53.663   23:49:24	-- common/autotest_common.sh@940 -- # kill -0 115900
00:15:53.663    23:49:24	-- common/autotest_common.sh@941 -- # uname
00:15:53.663   23:49:24	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:53.663    23:49:24	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115900
00:15:53.663   23:49:24	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:53.663   23:49:24	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:53.663   23:49:24	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 115900'
00:15:53.663  killing process with pid 115900
00:15:53.663   23:49:24	-- common/autotest_common.sh@955 -- # kill 115900
00:15:53.663  [2024-12-13 23:49:24.375455] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:53.663   23:49:24	-- common/autotest_common.sh@960 -- # wait 115900
00:15:53.663  [2024-12-13 23:49:24.375571] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:54.600   23:49:25	-- bdev/bdev_raid.sh@289 -- # return 0
00:15:54.600  
00:15:54.600  real	0m11.685s
00:15:54.600  user	0m20.712s
00:15:54.600  sys	0m1.419s
00:15:54.600   23:49:25	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:54.600   23:49:25	-- common/autotest_common.sh@10 -- # set +x
00:15:54.600  ************************************
00:15:54.600  END TEST raid_state_function_test
00:15:54.600  ************************************
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true
00:15:54.858   23:49:25	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:15:54.858   23:49:25	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:54.858   23:49:25	-- common/autotest_common.sh@10 -- # set +x
00:15:54.858  ************************************
00:15:54.858  START TEST raid_state_function_test_sb
00:15:54.858  ************************************
00:15:54.858   23:49:25	-- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 true
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@202 -- # local raid_level=concat
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:15:54.858    23:49:25	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:15:54.858    23:49:25	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:54.858    23:49:25	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:15:54.858    23:49:25	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:54.858    23:49:25	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:54.858    23:49:25	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:15:54.858    23:49:25	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:54.858    23:49:25	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:54.858    23:49:25	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:15:54.858    23:49:25	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:54.858    23:49:25	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']'
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@226 -- # raid_pid=116285
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116285'
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:15:54.858  Process raid pid: 116285
00:15:54.858   23:49:25	-- bdev/bdev_raid.sh@228 -- # waitforlisten 116285 /var/tmp/spdk-raid.sock
00:15:54.858   23:49:25	-- common/autotest_common.sh@829 -- # '[' -z 116285 ']'
00:15:54.858   23:49:25	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:15:54.858   23:49:25	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:54.858   23:49:25	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:15:54.858  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:15:54.858   23:49:25	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:54.858   23:49:25	-- common/autotest_common.sh@10 -- # set +x
00:15:54.858  [2024-12-13 23:49:25.435470] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:54.858  [2024-12-13 23:49:25.435669] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:55.117  [2024-12-13 23:49:25.605379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:55.117  [2024-12-13 23:49:25.769610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:55.376  [2024-12-13 23:49:25.939879] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:55.635   23:49:26	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:55.635   23:49:26	-- common/autotest_common.sh@862 -- # return 0
00:15:55.635   23:49:26	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:55.894  [2024-12-13 23:49:26.576346] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:55.894  [2024-12-13 23:49:26.576769] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:55.894  [2024-12-13 23:49:26.576796] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:55.894  [2024-12-13 23:49:26.576919] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:55.894  [2024-12-13 23:49:26.576936] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:55.894  [2024-12-13 23:49:26.577107] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:55.894   23:49:26	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:55.894   23:49:26	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:55.894   23:49:26	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:55.894   23:49:26	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:55.894   23:49:26	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:55.894   23:49:26	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:55.894   23:49:26	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:55.894   23:49:26	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:55.894   23:49:26	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:55.894   23:49:26	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:55.894    23:49:26	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:55.894    23:49:26	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:56.153   23:49:26	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:56.153    "name": "Existed_Raid",
00:15:56.153    "uuid": "af11aa44-0d9b-4298-ba90-1e0cd01dd6af",
00:15:56.153    "strip_size_kb": 64,
00:15:56.153    "state": "configuring",
00:15:56.153    "raid_level": "concat",
00:15:56.153    "superblock": true,
00:15:56.153    "num_base_bdevs": 3,
00:15:56.153    "num_base_bdevs_discovered": 0,
00:15:56.153    "num_base_bdevs_operational": 3,
00:15:56.153    "base_bdevs_list": [
00:15:56.153      {
00:15:56.153        "name": "BaseBdev1",
00:15:56.153        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:56.153        "is_configured": false,
00:15:56.153        "data_offset": 0,
00:15:56.153        "data_size": 0
00:15:56.153      },
00:15:56.153      {
00:15:56.153        "name": "BaseBdev2",
00:15:56.153        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:56.153        "is_configured": false,
00:15:56.153        "data_offset": 0,
00:15:56.153        "data_size": 0
00:15:56.153      },
00:15:56.153      {
00:15:56.153        "name": "BaseBdev3",
00:15:56.153        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:56.153        "is_configured": false,
00:15:56.153        "data_offset": 0,
00:15:56.153        "data_size": 0
00:15:56.153      }
00:15:56.153    ]
00:15:56.153  }'
00:15:56.153   23:49:26	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:56.153   23:49:26	-- common/autotest_common.sh@10 -- # set +x
00:15:56.720   23:49:27	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:15:56.979  [2024-12-13 23:49:27.568361] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:56.979  [2024-12-13 23:49:27.568394] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:15:56.979   23:49:27	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:57.238  [2024-12-13 23:49:27.816440] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:57.238  [2024-12-13 23:49:27.816768] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:57.238  [2024-12-13 23:49:27.816790] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:57.238  [2024-12-13 23:49:27.816916] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:57.238  [2024-12-13 23:49:27.816932] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:57.238  [2024-12-13 23:49:27.817052] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:57.238   23:49:27	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:15:57.497  [2024-12-13 23:49:28.030901] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:57.497  BaseBdev1
00:15:57.497   23:49:28	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:15:57.497   23:49:28	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:15:57.497   23:49:28	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:57.497   23:49:28	-- common/autotest_common.sh@899 -- # local i
00:15:57.497   23:49:28	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:57.497   23:49:28	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:57.498   23:49:28	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:57.498   23:49:28	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:57.756  [
00:15:57.756    {
00:15:57.756      "name": "BaseBdev1",
00:15:57.756      "aliases": [
00:15:57.756        "fa801e95-914e-4c51-a6b3-7ea49eb59826"
00:15:57.756      ],
00:15:57.756      "product_name": "Malloc disk",
00:15:57.756      "block_size": 512,
00:15:57.756      "num_blocks": 65536,
00:15:57.756      "uuid": "fa801e95-914e-4c51-a6b3-7ea49eb59826",
00:15:57.756      "assigned_rate_limits": {
00:15:57.756        "rw_ios_per_sec": 0,
00:15:57.756        "rw_mbytes_per_sec": 0,
00:15:57.756        "r_mbytes_per_sec": 0,
00:15:57.756        "w_mbytes_per_sec": 0
00:15:57.756      },
00:15:57.756      "claimed": true,
00:15:57.756      "claim_type": "exclusive_write",
00:15:57.756      "zoned": false,
00:15:57.756      "supported_io_types": {
00:15:57.756        "read": true,
00:15:57.756        "write": true,
00:15:57.756        "unmap": true,
00:15:57.756        "write_zeroes": true,
00:15:57.756        "flush": true,
00:15:57.756        "reset": true,
00:15:57.756        "compare": false,
00:15:57.756        "compare_and_write": false,
00:15:57.756        "abort": true,
00:15:57.756        "nvme_admin": false,
00:15:57.756        "nvme_io": false
00:15:57.756      },
00:15:57.756      "memory_domains": [
00:15:57.756        {
00:15:57.756          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:57.756          "dma_device_type": 2
00:15:57.756        }
00:15:57.756      ],
00:15:57.756      "driver_specific": {}
00:15:57.756    }
00:15:57.756  ]
00:15:57.756   23:49:28	-- common/autotest_common.sh@905 -- # return 0
00:15:57.756   23:49:28	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:57.756   23:49:28	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:57.756   23:49:28	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:57.756   23:49:28	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:57.756   23:49:28	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:57.756   23:49:28	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:57.756   23:49:28	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:57.756   23:49:28	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:57.756   23:49:28	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:57.756   23:49:28	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:57.756    23:49:28	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:57.756    23:49:28	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:58.015   23:49:28	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:58.015    "name": "Existed_Raid",
00:15:58.015    "uuid": "83c1f977-a96b-46d9-a267-6e402af8d61f",
00:15:58.015    "strip_size_kb": 64,
00:15:58.015    "state": "configuring",
00:15:58.015    "raid_level": "concat",
00:15:58.015    "superblock": true,
00:15:58.015    "num_base_bdevs": 3,
00:15:58.015    "num_base_bdevs_discovered": 1,
00:15:58.015    "num_base_bdevs_operational": 3,
00:15:58.015    "base_bdevs_list": [
00:15:58.015      {
00:15:58.015        "name": "BaseBdev1",
00:15:58.015        "uuid": "fa801e95-914e-4c51-a6b3-7ea49eb59826",
00:15:58.015        "is_configured": true,
00:15:58.015        "data_offset": 2048,
00:15:58.015        "data_size": 63488
00:15:58.015      },
00:15:58.015      {
00:15:58.015        "name": "BaseBdev2",
00:15:58.015        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:58.015        "is_configured": false,
00:15:58.015        "data_offset": 0,
00:15:58.015        "data_size": 0
00:15:58.015      },
00:15:58.015      {
00:15:58.015        "name": "BaseBdev3",
00:15:58.015        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:58.015        "is_configured": false,
00:15:58.015        "data_offset": 0,
00:15:58.015        "data_size": 0
00:15:58.015      }
00:15:58.015    ]
00:15:58.015  }'
00:15:58.015   23:49:28	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:58.015   23:49:28	-- common/autotest_common.sh@10 -- # set +x
00:15:58.583   23:49:29	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:15:58.842  [2024-12-13 23:49:29.331134] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:58.842  [2024-12-13 23:49:29.331176] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:15:58.842   23:49:29	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:15:58.842   23:49:29	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:15:59.100   23:49:29	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:15:59.100  BaseBdev1
00:15:59.100   23:49:29	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:15:59.100   23:49:29	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:15:59.100   23:49:29	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:59.100   23:49:29	-- common/autotest_common.sh@899 -- # local i
00:15:59.100   23:49:29	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:59.100   23:49:29	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:59.100   23:49:29	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:59.359   23:49:29	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:59.617  [
00:15:59.617    {
00:15:59.617      "name": "BaseBdev1",
00:15:59.617      "aliases": [
00:15:59.617        "d5e721d3-54b0-423c-8109-21b7f935606a"
00:15:59.617      ],
00:15:59.617      "product_name": "Malloc disk",
00:15:59.617      "block_size": 512,
00:15:59.617      "num_blocks": 65536,
00:15:59.617      "uuid": "d5e721d3-54b0-423c-8109-21b7f935606a",
00:15:59.617      "assigned_rate_limits": {
00:15:59.617        "rw_ios_per_sec": 0,
00:15:59.617        "rw_mbytes_per_sec": 0,
00:15:59.617        "r_mbytes_per_sec": 0,
00:15:59.617        "w_mbytes_per_sec": 0
00:15:59.617      },
00:15:59.617      "claimed": false,
00:15:59.617      "zoned": false,
00:15:59.617      "supported_io_types": {
00:15:59.617        "read": true,
00:15:59.617        "write": true,
00:15:59.617        "unmap": true,
00:15:59.617        "write_zeroes": true,
00:15:59.617        "flush": true,
00:15:59.617        "reset": true,
00:15:59.617        "compare": false,
00:15:59.617        "compare_and_write": false,
00:15:59.617        "abort": true,
00:15:59.617        "nvme_admin": false,
00:15:59.617        "nvme_io": false
00:15:59.617      },
00:15:59.617      "memory_domains": [
00:15:59.617        {
00:15:59.617          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:59.617          "dma_device_type": 2
00:15:59.617        }
00:15:59.617      ],
00:15:59.617      "driver_specific": {}
00:15:59.617    }
00:15:59.617  ]
00:15:59.617   23:49:30	-- common/autotest_common.sh@905 -- # return 0
00:15:59.617   23:49:30	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:59.876  [2024-12-13 23:49:30.385354] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:59.876  [2024-12-13 23:49:30.387245] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:59.876  [2024-12-13 23:49:30.387629] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:59.876  [2024-12-13 23:49:30.387663] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:59.876  [2024-12-13 23:49:30.387805] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:59.876   23:49:30	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:15:59.876   23:49:30	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:59.876   23:49:30	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:59.876   23:49:30	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:59.876   23:49:30	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:59.876   23:49:30	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:59.876   23:49:30	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:59.876   23:49:30	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:59.876   23:49:30	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:59.876   23:49:30	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:59.876   23:49:30	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:59.876   23:49:30	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:59.876    23:49:30	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:59.876    23:49:30	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:59.876   23:49:30	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:59.876    "name": "Existed_Raid",
00:15:59.876    "uuid": "0959eb95-1997-4dec-b24d-184c5f322b21",
00:15:59.876    "strip_size_kb": 64,
00:15:59.876    "state": "configuring",
00:15:59.876    "raid_level": "concat",
00:15:59.876    "superblock": true,
00:15:59.876    "num_base_bdevs": 3,
00:15:59.876    "num_base_bdevs_discovered": 1,
00:15:59.876    "num_base_bdevs_operational": 3,
00:15:59.876    "base_bdevs_list": [
00:15:59.876      {
00:15:59.876        "name": "BaseBdev1",
00:15:59.876        "uuid": "d5e721d3-54b0-423c-8109-21b7f935606a",
00:15:59.876        "is_configured": true,
00:15:59.876        "data_offset": 2048,
00:15:59.876        "data_size": 63488
00:15:59.876      },
00:15:59.876      {
00:15:59.876        "name": "BaseBdev2",
00:15:59.876        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:59.876        "is_configured": false,
00:15:59.876        "data_offset": 0,
00:15:59.876        "data_size": 0
00:15:59.876      },
00:15:59.876      {
00:15:59.876        "name": "BaseBdev3",
00:15:59.876        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:59.876        "is_configured": false,
00:15:59.876        "data_offset": 0,
00:15:59.876        "data_size": 0
00:15:59.876      }
00:15:59.876    ]
00:15:59.876  }'
00:15:59.876   23:49:30	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:59.876   23:49:30	-- common/autotest_common.sh@10 -- # set +x
00:16:00.818   23:49:31	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:16:00.818  [2024-12-13 23:49:31.424097] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:00.818  BaseBdev2
00:16:00.818   23:49:31	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:16:00.818   23:49:31	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:16:00.818   23:49:31	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:00.818   23:49:31	-- common/autotest_common.sh@899 -- # local i
00:16:00.818   23:49:31	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:00.818   23:49:31	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:00.818   23:49:31	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:01.076   23:49:31	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:01.335  [
00:16:01.335    {
00:16:01.335      "name": "BaseBdev2",
00:16:01.335      "aliases": [
00:16:01.335        "cae06576-1e9b-489e-9c92-e8ac09866490"
00:16:01.335      ],
00:16:01.335      "product_name": "Malloc disk",
00:16:01.335      "block_size": 512,
00:16:01.335      "num_blocks": 65536,
00:16:01.335      "uuid": "cae06576-1e9b-489e-9c92-e8ac09866490",
00:16:01.335      "assigned_rate_limits": {
00:16:01.335        "rw_ios_per_sec": 0,
00:16:01.335        "rw_mbytes_per_sec": 0,
00:16:01.335        "r_mbytes_per_sec": 0,
00:16:01.335        "w_mbytes_per_sec": 0
00:16:01.335      },
00:16:01.335      "claimed": true,
00:16:01.335      "claim_type": "exclusive_write",
00:16:01.335      "zoned": false,
00:16:01.335      "supported_io_types": {
00:16:01.335        "read": true,
00:16:01.335        "write": true,
00:16:01.335        "unmap": true,
00:16:01.335        "write_zeroes": true,
00:16:01.335        "flush": true,
00:16:01.335        "reset": true,
00:16:01.335        "compare": false,
00:16:01.335        "compare_and_write": false,
00:16:01.335        "abort": true,
00:16:01.335        "nvme_admin": false,
00:16:01.335        "nvme_io": false
00:16:01.335      },
00:16:01.335      "memory_domains": [
00:16:01.335        {
00:16:01.335          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:01.335          "dma_device_type": 2
00:16:01.335        }
00:16:01.335      ],
00:16:01.335      "driver_specific": {}
00:16:01.335    }
00:16:01.335  ]
00:16:01.335   23:49:31	-- common/autotest_common.sh@905 -- # return 0
00:16:01.335   23:49:31	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:01.335   23:49:31	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:01.335   23:49:31	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:16:01.335   23:49:31	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:01.335   23:49:31	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:01.335   23:49:31	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:16:01.335   23:49:31	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:01.335   23:49:31	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:01.335   23:49:31	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:01.335   23:49:31	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:01.335   23:49:31	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:01.335   23:49:31	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:01.335    23:49:31	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:01.335    23:49:31	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:01.594   23:49:32	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:01.594    "name": "Existed_Raid",
00:16:01.594    "uuid": "0959eb95-1997-4dec-b24d-184c5f322b21",
00:16:01.594    "strip_size_kb": 64,
00:16:01.594    "state": "configuring",
00:16:01.594    "raid_level": "concat",
00:16:01.594    "superblock": true,
00:16:01.594    "num_base_bdevs": 3,
00:16:01.594    "num_base_bdevs_discovered": 2,
00:16:01.594    "num_base_bdevs_operational": 3,
00:16:01.594    "base_bdevs_list": [
00:16:01.594      {
00:16:01.594        "name": "BaseBdev1",
00:16:01.594        "uuid": "d5e721d3-54b0-423c-8109-21b7f935606a",
00:16:01.594        "is_configured": true,
00:16:01.594        "data_offset": 2048,
00:16:01.594        "data_size": 63488
00:16:01.594      },
00:16:01.594      {
00:16:01.594        "name": "BaseBdev2",
00:16:01.594        "uuid": "cae06576-1e9b-489e-9c92-e8ac09866490",
00:16:01.594        "is_configured": true,
00:16:01.594        "data_offset": 2048,
00:16:01.594        "data_size": 63488
00:16:01.594      },
00:16:01.594      {
00:16:01.594        "name": "BaseBdev3",
00:16:01.594        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:01.594        "is_configured": false,
00:16:01.594        "data_offset": 0,
00:16:01.594        "data_size": 0
00:16:01.594      }
00:16:01.594    ]
00:16:01.594  }'
00:16:01.594   23:49:32	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:01.594   23:49:32	-- common/autotest_common.sh@10 -- # set +x
00:16:02.161   23:49:32	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:16:02.420  [2024-12-13 23:49:33.016445] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:02.420  [2024-12-13 23:49:33.016644] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580
00:16:02.420  [2024-12-13 23:49:33.016660] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:02.420  [2024-12-13 23:49:33.016785] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:16:02.420  BaseBdev3
00:16:02.420  [2024-12-13 23:49:33.017127] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580
00:16:02.420  [2024-12-13 23:49:33.017156] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580
00:16:02.420  [2024-12-13 23:49:33.017291] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:02.420   23:49:33	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:16:02.420   23:49:33	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:16:02.420   23:49:33	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:02.420   23:49:33	-- common/autotest_common.sh@899 -- # local i
00:16:02.420   23:49:33	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:02.420   23:49:33	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:02.420   23:49:33	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:02.678   23:49:33	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:02.937  [
00:16:02.937    {
00:16:02.937      "name": "BaseBdev3",
00:16:02.937      "aliases": [
00:16:02.937        "e15a01b2-02b3-4ef3-a684-fd71f7a5c4e9"
00:16:02.937      ],
00:16:02.937      "product_name": "Malloc disk",
00:16:02.937      "block_size": 512,
00:16:02.937      "num_blocks": 65536,
00:16:02.937      "uuid": "e15a01b2-02b3-4ef3-a684-fd71f7a5c4e9",
00:16:02.937      "assigned_rate_limits": {
00:16:02.937        "rw_ios_per_sec": 0,
00:16:02.937        "rw_mbytes_per_sec": 0,
00:16:02.937        "r_mbytes_per_sec": 0,
00:16:02.937        "w_mbytes_per_sec": 0
00:16:02.937      },
00:16:02.937      "claimed": true,
00:16:02.937      "claim_type": "exclusive_write",
00:16:02.937      "zoned": false,
00:16:02.937      "supported_io_types": {
00:16:02.937        "read": true,
00:16:02.937        "write": true,
00:16:02.937        "unmap": true,
00:16:02.937        "write_zeroes": true,
00:16:02.937        "flush": true,
00:16:02.937        "reset": true,
00:16:02.937        "compare": false,
00:16:02.937        "compare_and_write": false,
00:16:02.937        "abort": true,
00:16:02.937        "nvme_admin": false,
00:16:02.937        "nvme_io": false
00:16:02.937      },
00:16:02.937      "memory_domains": [
00:16:02.937        {
00:16:02.937          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:02.937          "dma_device_type": 2
00:16:02.937        }
00:16:02.937      ],
00:16:02.937      "driver_specific": {}
00:16:02.937    }
00:16:02.937  ]
00:16:02.937   23:49:33	-- common/autotest_common.sh@905 -- # return 0
00:16:02.937   23:49:33	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:02.937   23:49:33	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:02.937   23:49:33	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:16:02.937   23:49:33	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:02.937   23:49:33	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:02.937   23:49:33	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:16:02.937   23:49:33	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:02.937   23:49:33	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:02.937   23:49:33	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:02.937   23:49:33	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:02.937   23:49:33	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:02.937   23:49:33	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:02.937    23:49:33	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:02.937    23:49:33	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:03.196   23:49:33	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:03.196    "name": "Existed_Raid",
00:16:03.196    "uuid": "0959eb95-1997-4dec-b24d-184c5f322b21",
00:16:03.196    "strip_size_kb": 64,
00:16:03.196    "state": "online",
00:16:03.196    "raid_level": "concat",
00:16:03.196    "superblock": true,
00:16:03.196    "num_base_bdevs": 3,
00:16:03.196    "num_base_bdevs_discovered": 3,
00:16:03.196    "num_base_bdevs_operational": 3,
00:16:03.196    "base_bdevs_list": [
00:16:03.196      {
00:16:03.196        "name": "BaseBdev1",
00:16:03.196        "uuid": "d5e721d3-54b0-423c-8109-21b7f935606a",
00:16:03.196        "is_configured": true,
00:16:03.196        "data_offset": 2048,
00:16:03.196        "data_size": 63488
00:16:03.196      },
00:16:03.196      {
00:16:03.196        "name": "BaseBdev2",
00:16:03.196        "uuid": "cae06576-1e9b-489e-9c92-e8ac09866490",
00:16:03.196        "is_configured": true,
00:16:03.196        "data_offset": 2048,
00:16:03.196        "data_size": 63488
00:16:03.196      },
00:16:03.196      {
00:16:03.196        "name": "BaseBdev3",
00:16:03.196        "uuid": "e15a01b2-02b3-4ef3-a684-fd71f7a5c4e9",
00:16:03.196        "is_configured": true,
00:16:03.196        "data_offset": 2048,
00:16:03.196        "data_size": 63488
00:16:03.196      }
00:16:03.196    ]
00:16:03.196  }'
00:16:03.196   23:49:33	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:03.196   23:49:33	-- common/autotest_common.sh@10 -- # set +x
00:16:03.762   23:49:34	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:16:04.021  [2024-12-13 23:49:34.497761] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:04.021  [2024-12-13 23:49:34.497800] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:04.021  [2024-12-13 23:49:34.497858] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:04.021   23:49:34	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:16:04.021   23:49:34	-- bdev/bdev_raid.sh@264 -- # has_redundancy concat
00:16:04.021   23:49:34	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:16:04.021   23:49:34	-- bdev/bdev_raid.sh@197 -- # return 1
00:16:04.021   23:49:34	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:16:04.021   23:49:34	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:16:04.021   23:49:34	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:04.021   23:49:34	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:16:04.021   23:49:34	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:16:04.021   23:49:34	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:04.021   23:49:34	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:16:04.021   23:49:34	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:04.021   23:49:34	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:04.021   23:49:34	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:04.021   23:49:34	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:04.021    23:49:34	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:04.021    23:49:34	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:04.279   23:49:34	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:04.279    "name": "Existed_Raid",
00:16:04.279    "uuid": "0959eb95-1997-4dec-b24d-184c5f322b21",
00:16:04.279    "strip_size_kb": 64,
00:16:04.279    "state": "offline",
00:16:04.279    "raid_level": "concat",
00:16:04.279    "superblock": true,
00:16:04.279    "num_base_bdevs": 3,
00:16:04.279    "num_base_bdevs_discovered": 2,
00:16:04.279    "num_base_bdevs_operational": 2,
00:16:04.279    "base_bdevs_list": [
00:16:04.279      {
00:16:04.279        "name": null,
00:16:04.279        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:04.279        "is_configured": false,
00:16:04.279        "data_offset": 2048,
00:16:04.279        "data_size": 63488
00:16:04.279      },
00:16:04.279      {
00:16:04.279        "name": "BaseBdev2",
00:16:04.279        "uuid": "cae06576-1e9b-489e-9c92-e8ac09866490",
00:16:04.279        "is_configured": true,
00:16:04.279        "data_offset": 2048,
00:16:04.279        "data_size": 63488
00:16:04.279      },
00:16:04.279      {
00:16:04.279        "name": "BaseBdev3",
00:16:04.279        "uuid": "e15a01b2-02b3-4ef3-a684-fd71f7a5c4e9",
00:16:04.279        "is_configured": true,
00:16:04.279        "data_offset": 2048,
00:16:04.279        "data_size": 63488
00:16:04.279      }
00:16:04.279    ]
00:16:04.279  }'
00:16:04.279   23:49:34	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:04.279   23:49:34	-- common/autotest_common.sh@10 -- # set +x
00:16:04.854   23:49:35	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:16:04.854   23:49:35	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:04.854    23:49:35	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:04.854    23:49:35	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:16:04.854   23:49:35	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:16:04.854   23:49:35	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:04.854   23:49:35	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:16:05.111  [2024-12-13 23:49:35.775444] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:05.370   23:49:35	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:16:05.370   23:49:35	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:05.370    23:49:35	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:05.370    23:49:35	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:16:05.629   23:49:36	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:16:05.629   23:49:36	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:05.629   23:49:36	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:16:05.629  [2024-12-13 23:49:36.340718] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:05.629  [2024-12-13 23:49:36.340776] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline
00:16:05.887   23:49:36	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:16:05.887   23:49:36	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:05.887    23:49:36	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:05.887    23:49:36	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:16:06.147   23:49:36	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:16:06.147   23:49:36	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:16:06.147   23:49:36	-- bdev/bdev_raid.sh@287 -- # killprocess 116285
00:16:06.147   23:49:36	-- common/autotest_common.sh@936 -- # '[' -z 116285 ']'
00:16:06.147   23:49:36	-- common/autotest_common.sh@940 -- # kill -0 116285
00:16:06.147    23:49:36	-- common/autotest_common.sh@941 -- # uname
00:16:06.147   23:49:36	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:06.147    23:49:36	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116285
00:16:06.147   23:49:36	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:06.147   23:49:36	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:06.147   23:49:36	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 116285'
00:16:06.147  killing process with pid 116285
00:16:06.147   23:49:36	-- common/autotest_common.sh@955 -- # kill 116285
00:16:06.147  [2024-12-13 23:49:36.680076] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:06.147  [2024-12-13 23:49:36.680188] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:06.147   23:49:36	-- common/autotest_common.sh@960 -- # wait 116285
00:16:07.084  ************************************
00:16:07.084  END TEST raid_state_function_test_sb
00:16:07.084  ************************************
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@289 -- # return 0
00:16:07.084  
00:16:07.084  real	0m12.253s
00:16:07.084  user	0m21.525s
00:16:07.084  sys	0m1.466s
00:16:07.084   23:49:37	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:07.084   23:49:37	-- common/autotest_common.sh@10 -- # set +x
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3
00:16:07.084   23:49:37	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:16:07.084   23:49:37	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:07.084   23:49:37	-- common/autotest_common.sh@10 -- # set +x
00:16:07.084  ************************************
00:16:07.084  START TEST raid_superblock_test
00:16:07.084  ************************************
00:16:07.084   23:49:37	-- common/autotest_common.sh@1114 -- # raid_superblock_test concat 3
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@338 -- # local raid_level=concat
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']'
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@350 -- # strip_size=64
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:16:07.084   23:49:37	-- bdev/bdev_raid.sh@357 -- # raid_pid=116664
00:16:07.085   23:49:37	-- bdev/bdev_raid.sh@358 -- # waitforlisten 116664 /var/tmp/spdk-raid.sock
00:16:07.085   23:49:37	-- common/autotest_common.sh@829 -- # '[' -z 116664 ']'
00:16:07.085   23:49:37	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:16:07.085   23:49:37	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:16:07.085  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:16:07.085   23:49:37	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:07.085   23:49:37	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:16:07.085   23:49:37	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:07.085   23:49:37	-- common/autotest_common.sh@10 -- # set +x
00:16:07.085  [2024-12-13 23:49:37.735770] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:07.085  [2024-12-13 23:49:37.735972] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116664 ]
00:16:07.344  [2024-12-13 23:49:37.892056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:07.344  [2024-12-13 23:49:38.057904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:07.603  [2024-12-13 23:49:38.226496] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:08.171   23:49:38	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:08.171   23:49:38	-- common/autotest_common.sh@862 -- # return 0
00:16:08.171   23:49:38	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:16:08.171   23:49:38	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:08.171   23:49:38	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:16:08.171   23:49:38	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:16:08.171   23:49:38	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:16:08.171   23:49:38	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:08.171   23:49:38	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:08.171   23:49:38	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:08.171   23:49:38	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:16:08.171  malloc1
00:16:08.171   23:49:38	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:08.430  [2024-12-13 23:49:39.126679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:08.430  [2024-12-13 23:49:39.127178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:08.430  [2024-12-13 23:49:39.127330] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:16:08.430  [2024-12-13 23:49:39.127550] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:08.430  [2024-12-13 23:49:39.129807] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:08.430  [2024-12-13 23:49:39.129976] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:08.430  pt1
00:16:08.430   23:49:39	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:08.430   23:49:39	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:08.430   23:49:39	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:16:08.430   23:49:39	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:16:08.430   23:49:39	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:16:08.430   23:49:39	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:08.430   23:49:39	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:08.430   23:49:39	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:08.430   23:49:39	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:16:08.689  malloc2
00:16:08.948   23:49:39	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:08.948  [2024-12-13 23:49:39.668861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:08.948  [2024-12-13 23:49:39.669105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:08.948  [2024-12-13 23:49:39.669279] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:16:08.948  [2024-12-13 23:49:39.669451] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:08.948  [2024-12-13 23:49:39.671642] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:08.948  [2024-12-13 23:49:39.671818] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:08.948  pt2
00:16:09.207   23:49:39	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:09.207   23:49:39	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:09.207   23:49:39	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:16:09.207   23:49:39	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:16:09.207   23:49:39	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:16:09.207   23:49:39	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:09.207   23:49:39	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:09.207   23:49:39	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:09.207   23:49:39	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:16:09.207  malloc3
00:16:09.207   23:49:39	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:09.468  [2024-12-13 23:49:40.078423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:09.469  [2024-12-13 23:49:40.078584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:09.469  [2024-12-13 23:49:40.078745] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:16:09.469  [2024-12-13 23:49:40.078931] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:09.469  [2024-12-13 23:49:40.081226] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:09.469  [2024-12-13 23:49:40.081372] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:09.469  pt3
00:16:09.469   23:49:40	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:09.469   23:49:40	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:09.469   23:49:40	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s
00:16:09.727  [2024-12-13 23:49:40.322507] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:09.727  [2024-12-13 23:49:40.324319] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:09.728  [2024-12-13 23:49:40.324388] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:09.728  [2024-12-13 23:49:40.324558] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780
00:16:09.728  [2024-12-13 23:49:40.324588] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:09.728  [2024-12-13 23:49:40.324711] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930
00:16:09.728  [2024-12-13 23:49:40.325063] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780
00:16:09.728  [2024-12-13 23:49:40.325086] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780
00:16:09.728  [2024-12-13 23:49:40.325238] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:09.728   23:49:40	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:16:09.728   23:49:40	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:09.728   23:49:40	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:09.728   23:49:40	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:16:09.728   23:49:40	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:09.728   23:49:40	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:09.728   23:49:40	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:09.728   23:49:40	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:09.728   23:49:40	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:09.728   23:49:40	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:09.728    23:49:40	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:09.728    23:49:40	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:09.987   23:49:40	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:09.987    "name": "raid_bdev1",
00:16:09.987    "uuid": "b625cad2-0190-44e5-aa48-36f10255a460",
00:16:09.987    "strip_size_kb": 64,
00:16:09.987    "state": "online",
00:16:09.987    "raid_level": "concat",
00:16:09.987    "superblock": true,
00:16:09.987    "num_base_bdevs": 3,
00:16:09.987    "num_base_bdevs_discovered": 3,
00:16:09.987    "num_base_bdevs_operational": 3,
00:16:09.987    "base_bdevs_list": [
00:16:09.987      {
00:16:09.987        "name": "pt1",
00:16:09.987        "uuid": "c34f45b8-0d9b-5da1-a541-a0c487a63115",
00:16:09.987        "is_configured": true,
00:16:09.987        "data_offset": 2048,
00:16:09.987        "data_size": 63488
00:16:09.987      },
00:16:09.987      {
00:16:09.987        "name": "pt2",
00:16:09.987        "uuid": "f9eddba7-3142-5586-a982-14e946968076",
00:16:09.987        "is_configured": true,
00:16:09.987        "data_offset": 2048,
00:16:09.987        "data_size": 63488
00:16:09.987      },
00:16:09.987      {
00:16:09.987        "name": "pt3",
00:16:09.987        "uuid": "ee3b0981-1803-5f43-9413-223b90eab030",
00:16:09.987        "is_configured": true,
00:16:09.987        "data_offset": 2048,
00:16:09.987        "data_size": 63488
00:16:09.987      }
00:16:09.987    ]
00:16:09.987  }'
00:16:09.987   23:49:40	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:09.987   23:49:40	-- common/autotest_common.sh@10 -- # set +x
00:16:10.582    23:49:41	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:16:10.582    23:49:41	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:16:10.582  [2024-12-13 23:49:41.262762] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:10.582   23:49:41	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=b625cad2-0190-44e5-aa48-36f10255a460
00:16:10.582   23:49:41	-- bdev/bdev_raid.sh@380 -- # '[' -z b625cad2-0190-44e5-aa48-36f10255a460 ']'
00:16:10.582   23:49:41	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:16:10.854  [2024-12-13 23:49:41.522657] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:10.854  [2024-12-13 23:49:41.522682] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:10.854  [2024-12-13 23:49:41.522744] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:10.854  [2024-12-13 23:49:41.522800] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:10.854  [2024-12-13 23:49:41.522812] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline
00:16:10.854    23:49:41	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:16:10.854    23:49:41	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:11.118   23:49:41	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:16:11.118   23:49:41	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:16:11.118   23:49:41	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:11.118   23:49:41	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:16:11.377   23:49:42	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:11.377   23:49:42	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:11.636   23:49:42	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:11.636   23:49:42	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:16:11.894    23:49:42	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:16:11.894    23:49:42	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:16:12.153   23:49:42	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:16:12.153   23:49:42	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:16:12.153   23:49:42	-- common/autotest_common.sh@650 -- # local es=0
00:16:12.153   23:49:42	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:16:12.153   23:49:42	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:12.153   23:49:42	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:12.153    23:49:42	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:12.153   23:49:42	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:12.153    23:49:42	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:12.153   23:49:42	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:12.153   23:49:42	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:12.153   23:49:42	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:16:12.153   23:49:42	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:16:12.153  [2024-12-13 23:49:42.838830] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:16:12.153  [2024-12-13 23:49:42.840598] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:16:12.153  [2024-12-13 23:49:42.840651] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:16:12.153  [2024-12-13 23:49:42.840701] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:16:12.153  [2024-12-13 23:49:42.841116] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:16:12.153  [2024-12-13 23:49:42.841281] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:16:12.153  [2024-12-13 23:49:42.841709] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:12.153  [2024-12-13 23:49:42.841788] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring
00:16:12.153  request:
00:16:12.153  {
00:16:12.153    "name": "raid_bdev1",
00:16:12.153    "raid_level": "concat",
00:16:12.153    "base_bdevs": [
00:16:12.153      "malloc1",
00:16:12.153      "malloc2",
00:16:12.153      "malloc3"
00:16:12.153    ],
00:16:12.153    "superblock": false,
00:16:12.153    "strip_size_kb": 64,
00:16:12.153    "method": "bdev_raid_create",
00:16:12.153    "req_id": 1
00:16:12.153  }
00:16:12.153  Got JSON-RPC error response
00:16:12.153  response:
00:16:12.153  {
00:16:12.153    "code": -17,
00:16:12.153    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:16:12.153  }
00:16:12.153   23:49:42	-- common/autotest_common.sh@653 -- # es=1
00:16:12.153   23:49:42	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:16:12.153   23:49:42	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:16:12.153   23:49:42	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:16:12.153    23:49:42	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:12.153    23:49:42	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:16:12.412   23:49:43	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:16:12.412   23:49:43	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:16:12.412   23:49:43	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:12.671  [2024-12-13 23:49:43.315202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:12.671  [2024-12-13 23:49:43.315510] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:12.671  [2024-12-13 23:49:43.315664] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:16:12.671  [2024-12-13 23:49:43.315841] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:12.671  [2024-12-13 23:49:43.318487] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:12.671  [2024-12-13 23:49:43.318658] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:12.671  [2024-12-13 23:49:43.318918] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:16:12.671  [2024-12-13 23:49:43.318985] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:12.671  pt1
00:16:12.671   23:49:43	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:16:12.671   23:49:43	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:12.671   23:49:43	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:12.671   23:49:43	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:16:12.671   23:49:43	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:12.671   23:49:43	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:12.671   23:49:43	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:12.671   23:49:43	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:12.671   23:49:43	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:12.671   23:49:43	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:12.671    23:49:43	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:12.671    23:49:43	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:12.930   23:49:43	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:12.930    "name": "raid_bdev1",
00:16:12.930    "uuid": "b625cad2-0190-44e5-aa48-36f10255a460",
00:16:12.930    "strip_size_kb": 64,
00:16:12.930    "state": "configuring",
00:16:12.930    "raid_level": "concat",
00:16:12.930    "superblock": true,
00:16:12.930    "num_base_bdevs": 3,
00:16:12.930    "num_base_bdevs_discovered": 1,
00:16:12.930    "num_base_bdevs_operational": 3,
00:16:12.930    "base_bdevs_list": [
00:16:12.930      {
00:16:12.930        "name": "pt1",
00:16:12.930        "uuid": "c34f45b8-0d9b-5da1-a541-a0c487a63115",
00:16:12.930        "is_configured": true,
00:16:12.930        "data_offset": 2048,
00:16:12.930        "data_size": 63488
00:16:12.930      },
00:16:12.930      {
00:16:12.930        "name": null,
00:16:12.930        "uuid": "f9eddba7-3142-5586-a982-14e946968076",
00:16:12.930        "is_configured": false,
00:16:12.930        "data_offset": 2048,
00:16:12.930        "data_size": 63488
00:16:12.930      },
00:16:12.930      {
00:16:12.930        "name": null,
00:16:12.930        "uuid": "ee3b0981-1803-5f43-9413-223b90eab030",
00:16:12.930        "is_configured": false,
00:16:12.930        "data_offset": 2048,
00:16:12.930        "data_size": 63488
00:16:12.930      }
00:16:12.930    ]
00:16:12.930  }'
00:16:12.930   23:49:43	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:12.930   23:49:43	-- common/autotest_common.sh@10 -- # set +x
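verify_raid_bdev_state (bdev_raid.sh@117-129) fetches the raid's JSON via bdev_raid_get_bdevs and asserts it against the caller's expectations: here state "configuring", level concat, strip size 64, three operational members, with only pt1 discovered so far. A hedged sketch of the checks, inferred from the locals visible in the xtrace (the exact comparisons after @127 are not echoed in this log; rpc_py stands for the rpc.py invocation on the raid socket shown above):

verify_raid_bdev_state() {
    local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 operational=$5
    local info
    info=$(rpc_py bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    [[ $(jq -r '.state' <<< "$info") == "$expected_state" ]] \
        && [[ $(jq -r '.raid_level' <<< "$info") == "$raid_level" ]] \
        && [[ $(jq -r '.strip_size_kb' <<< "$info") == "$strip_size" ]] \
        && [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == "$operational" ]]
}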
00:16:13.497   23:49:44	-- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']'
00:16:13.497   23:49:44	-- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:13.755  [2024-12-13 23:49:44.307380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:13.755  [2024-12-13 23:49:44.307781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:13.755  [2024-12-13 23:49:44.307969] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:16:13.755  [2024-12-13 23:49:44.308097] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:13.755  [2024-12-13 23:49:44.308682] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:13.755  [2024-12-13 23:49:44.308858] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:13.755  [2024-12-13 23:49:44.309106] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:16:13.756  [2024-12-13 23:49:44.309137] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:13.756  pt2
00:16:13.756   23:49:44	-- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:14.014  [2024-12-13 23:49:44.563466] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:16:14.014   23:49:44	-- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:16:14.014   23:49:44	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:14.014   23:49:44	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:14.014   23:49:44	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:16:14.014   23:49:44	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:14.014   23:49:44	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:14.014   23:49:44	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:14.014   23:49:44	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:14.014   23:49:44	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:14.014   23:49:44	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:14.014    23:49:44	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:14.014    23:49:44	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:14.273   23:49:44	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:14.273    "name": "raid_bdev1",
00:16:14.273    "uuid": "b625cad2-0190-44e5-aa48-36f10255a460",
00:16:14.273    "strip_size_kb": 64,
00:16:14.273    "state": "configuring",
00:16:14.273    "raid_level": "concat",
00:16:14.273    "superblock": true,
00:16:14.273    "num_base_bdevs": 3,
00:16:14.273    "num_base_bdevs_discovered": 1,
00:16:14.273    "num_base_bdevs_operational": 3,
00:16:14.273    "base_bdevs_list": [
00:16:14.273      {
00:16:14.273        "name": "pt1",
00:16:14.273        "uuid": "c34f45b8-0d9b-5da1-a541-a0c487a63115",
00:16:14.273        "is_configured": true,
00:16:14.273        "data_offset": 2048,
00:16:14.273        "data_size": 63488
00:16:14.273      },
00:16:14.273      {
00:16:14.273        "name": null,
00:16:14.273        "uuid": "f9eddba7-3142-5586-a982-14e946968076",
00:16:14.273        "is_configured": false,
00:16:14.273        "data_offset": 2048,
00:16:14.273        "data_size": 63488
00:16:14.273      },
00:16:14.273      {
00:16:14.273        "name": null,
00:16:14.273        "uuid": "ee3b0981-1803-5f43-9413-223b90eab030",
00:16:14.273        "is_configured": false,
00:16:14.273        "data_offset": 2048,
00:16:14.273        "data_size": 63488
00:16:14.273      }
00:16:14.273    ]
00:16:14.273  }'
00:16:14.273   23:49:44	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:14.273   23:49:44	-- common/autotest_common.sh@10 -- # set +x
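The round trip above (bdev_raid.sh@416-418) shows that removing a passthru base bdev while the raid is still configuring simply drops it back out: pt2 is created, claimed via its on-disk superblock, deleted again, and raid_bdev1 remains "configuring" with one discovered member. The same step by hand, with the paths and socket exactly as in the xtrace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$rpc" -s "$sock" bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
"$rpc" -s "$sock" bdev_passthru_delete pt2
"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # expect: configuring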
00:16:14.838   23:49:45	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:16:14.838   23:49:45	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:16:14.838   23:49:45	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:15.097  [2024-12-13 23:49:45.619649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:15.097  [2024-12-13 23:49:45.620097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:15.097  [2024-12-13 23:49:45.620259] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:16:15.097  [2024-12-13 23:49:45.620399] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:15.097  [2024-12-13 23:49:45.620989] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:15.097  [2024-12-13 23:49:45.621165] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:15.097  [2024-12-13 23:49:45.621400] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:16:15.097  [2024-12-13 23:49:45.621430] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:15.097  pt2
00:16:15.097   23:49:45	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:16:15.097   23:49:45	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:16:15.097   23:49:45	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:15.363  [2024-12-13 23:49:45.867703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:15.363  [2024-12-13 23:49:45.867862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:15.363  [2024-12-13 23:49:45.868038] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:16:15.363  [2024-12-13 23:49:45.868166] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:15.363  [2024-12-13 23:49:45.868658] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:15.363  [2024-12-13 23:49:45.868825] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:15.363  [2024-12-13 23:49:45.869077] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:16:15.363  [2024-12-13 23:49:45.869104] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:15.363  [2024-12-13 23:49:45.869223] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980
00:16:15.363  [2024-12-13 23:49:45.869248] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:15.364  [2024-12-13 23:49:45.869356] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:16:15.364  [2024-12-13 23:49:45.869678] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980
00:16:15.364  [2024-12-13 23:49:45.869702] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980
00:16:15.364  [2024-12-13 23:49:45.869832] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:15.364  pt3
00:16:15.364   23:49:45	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:16:15.364   23:49:45	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:16:15.364   23:49:45	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:16:15.364   23:49:45	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:15.364   23:49:45	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:15.364   23:49:45	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:16:15.364   23:49:45	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:15.364   23:49:45	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:15.364   23:49:45	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:15.364   23:49:45	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:15.364   23:49:45	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:15.364   23:49:45	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:15.364    23:49:45	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:15.364    23:49:45	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:15.622   23:49:46	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:15.623    "name": "raid_bdev1",
00:16:15.623    "uuid": "b625cad2-0190-44e5-aa48-36f10255a460",
00:16:15.623    "strip_size_kb": 64,
00:16:15.623    "state": "online",
00:16:15.623    "raid_level": "concat",
00:16:15.623    "superblock": true,
00:16:15.623    "num_base_bdevs": 3,
00:16:15.623    "num_base_bdevs_discovered": 3,
00:16:15.623    "num_base_bdevs_operational": 3,
00:16:15.623    "base_bdevs_list": [
00:16:15.623      {
00:16:15.623        "name": "pt1",
00:16:15.623        "uuid": "c34f45b8-0d9b-5da1-a541-a0c487a63115",
00:16:15.623        "is_configured": true,
00:16:15.623        "data_offset": 2048,
00:16:15.623        "data_size": 63488
00:16:15.623      },
00:16:15.623      {
00:16:15.623        "name": "pt2",
00:16:15.623        "uuid": "f9eddba7-3142-5586-a982-14e946968076",
00:16:15.623        "is_configured": true,
00:16:15.623        "data_offset": 2048,
00:16:15.623        "data_size": 63488
00:16:15.623      },
00:16:15.623      {
00:16:15.623        "name": "pt3",
00:16:15.623        "uuid": "ee3b0981-1803-5f43-9413-223b90eab030",
00:16:15.623        "is_configured": true,
00:16:15.623        "data_offset": 2048,
00:16:15.623        "data_size": 63488
00:16:15.623      }
00:16:15.623    ]
00:16:15.623  }'
00:16:15.623   23:49:46	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:15.623   23:49:46	-- common/autotest_common.sh@10 -- # set +x
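Once pt3 registers, examine finds the third superblock, the raid configures and goes online; the blockcnt reported at bdev_raid.c:1585 is consistent with concat simply summing the members' data regions:

echo $((3 * 63488))   # 190464 blocks of 512 B, matching "blockcnt 190464, blocklen 512" above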
00:16:16.190    23:49:46	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:16:16.190    23:49:46	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:16:16.190  [2024-12-13 23:49:46.892119] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:16.190   23:49:46	-- bdev/bdev_raid.sh@430 -- # '[' b625cad2-0190-44e5-aa48-36f10255a460 '!=' b625cad2-0190-44e5-aa48-36f10255a460 ']'
00:16:16.190   23:49:46	-- bdev/bdev_raid.sh@434 -- # has_redundancy concat
00:16:16.190   23:49:46	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:16:16.190   23:49:46	-- bdev/bdev_raid.sh@197 -- # return 1
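has_redundancy (bdev_raid.sh@195-197) gates the teardown path: concat cannot survive losing a member, so it returns 1 here, while the raid1 run later in this log takes the return 0 branch at @196. A sketch consistent with both call sites (the real case statement in bdev_raid.sh may list further levels):

has_redundancy() {
    case $1 in
        raid1) return 0 ;;   # mirrored levels survive a member loss
        *) return 1 ;;       # raid0/concat do not
    esac
}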
00:16:16.190   23:49:46	-- bdev/bdev_raid.sh@511 -- # killprocess 116664
00:16:16.190   23:49:46	-- common/autotest_common.sh@936 -- # '[' -z 116664 ']'
00:16:16.190   23:49:46	-- common/autotest_common.sh@940 -- # kill -0 116664
00:16:16.190    23:49:46	-- common/autotest_common.sh@941 -- # uname
00:16:16.190   23:49:46	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:16.190    23:49:46	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116664
00:16:16.449   23:49:46	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:16.449   23:49:46	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:16.449   23:49:46	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 116664'
00:16:16.449  killing process with pid 116664
00:16:16.449   23:49:46	-- common/autotest_common.sh@955 -- # kill 116664
00:16:16.449  [2024-12-13 23:49:46.935844] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:16.449  [2024-12-13 23:49:46.935926] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:16.449  [2024-12-13 23:49:46.935982] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:16.449  [2024-12-13 23:49:46.935993] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline
00:16:16.449   23:49:46	-- common/autotest_common.sh@960 -- # wait 116664
00:16:16.449  [2024-12-13 23:49:47.143148] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:17.823  ************************************
00:16:17.823  END TEST raid_superblock_test
00:16:17.823  ************************************
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@513 -- # return 0
00:16:17.823  
00:16:17.823  real	0m10.478s
00:16:17.823  user	0m18.152s
00:16:17.823  sys	0m1.259s
00:16:17.823   23:49:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:17.823   23:49:48	-- common/autotest_common.sh@10 -- # set +x
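The real/user/sys triplet is the bash time keyword's output for the whole test function; user time (18.2 s) exceeding wall time (10.5 s) just means the SPDK target and the test script kept more than one CPU busy concurrently. A rough sketch of the run_test wrapper shape implied by the START/END banners and the timing (an assumption about its structure; the actual helper also saves and restores xtrace state):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # produces the real/user/sys lines above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}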
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false
00:16:17.823   23:49:48	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:16:17.823   23:49:48	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:17.823   23:49:48	-- common/autotest_common.sh@10 -- # set +x
00:16:17.823  ************************************
00:16:17.823  START TEST raid_state_function_test
00:16:17.823  ************************************
00:16:17.823   23:49:48	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 false
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid1
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:16:17.823    23:49:48	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:16:17.823    23:49:48	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:17.823    23:49:48	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:16:17.823    23:49:48	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:17.823    23:49:48	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:17.823    23:49:48	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:16:17.823    23:49:48	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:17.823    23:49:48	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:17.823    23:49:48	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:16:17.823    23:49:48	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:17.823    23:49:48	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']'
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@216 -- # strip_size=0
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@226 -- # raid_pid=116975
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116975'
00:16:17.823  Process raid pid: 116975
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:16:17.823   23:49:48	-- bdev/bdev_raid.sh@228 -- # waitforlisten 116975 /var/tmp/spdk-raid.sock
00:16:17.823   23:49:48	-- common/autotest_common.sh@829 -- # '[' -z 116975 ']'
00:16:17.823   23:49:48	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:16:17.823   23:49:48	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:17.823  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:16:17.823   23:49:48	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:16:17.823   23:49:48	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:17.823   23:49:48	-- common/autotest_common.sh@10 -- # set +x
00:16:17.823  [2024-12-13 23:49:48.288238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:17.823  [2024-12-13 23:49:48.289328] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:17.823  [2024-12-13 23:49:48.465903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:18.081  [2024-12-13 23:49:48.685091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:18.339  [2024-12-13 23:49:48.872394] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:18.597   23:49:49	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:18.597   23:49:49	-- common/autotest_common.sh@862 -- # return 0
00:16:18.597   23:49:49	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:16:18.856  [2024-12-13 23:49:49.409214] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:18.856  [2024-12-13 23:49:49.409766] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:18.856  [2024-12-13 23:49:49.409797] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:18.856  [2024-12-13 23:49:49.409986] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:18.856  [2024-12-13 23:49:49.410027] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:18.856  [2024-12-13 23:49:49.410209] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
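raid_state_function_test drives Existed_Raid through its states by materializing the base bdevs one at a time: the create above succeeds even though none of BaseBdev1..3 exist yet, parking the raid in "configuring" until bdev_malloc_create supplies each member. The opening moves by hand, with the RPCs exactly as in the xtrace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
"$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1   # 32 MiB / 512 B = 65536 blocks, as in the JSON below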
00:16:18.856   23:49:49	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:16:18.856   23:49:49	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:18.856   23:49:49	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:18.856   23:49:49	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:18.856   23:49:49	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:18.856   23:49:49	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:18.856   23:49:49	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:18.856   23:49:49	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:18.856   23:49:49	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:18.856   23:49:49	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:18.856    23:49:49	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:18.856    23:49:49	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:19.115   23:49:49	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:19.115    "name": "Existed_Raid",
00:16:19.115    "uuid": "00000000-0000-0000-0000-000000000000",
00:16:19.115    "strip_size_kb": 0,
00:16:19.115    "state": "configuring",
00:16:19.115    "raid_level": "raid1",
00:16:19.115    "superblock": false,
00:16:19.115    "num_base_bdevs": 3,
00:16:19.115    "num_base_bdevs_discovered": 0,
00:16:19.115    "num_base_bdevs_operational": 3,
00:16:19.115    "base_bdevs_list": [
00:16:19.115      {
00:16:19.115        "name": "BaseBdev1",
00:16:19.115        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:19.115        "is_configured": false,
00:16:19.115        "data_offset": 0,
00:16:19.115        "data_size": 0
00:16:19.115      },
00:16:19.115      {
00:16:19.115        "name": "BaseBdev2",
00:16:19.115        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:19.115        "is_configured": false,
00:16:19.115        "data_offset": 0,
00:16:19.115        "data_size": 0
00:16:19.115      },
00:16:19.115      {
00:16:19.115        "name": "BaseBdev3",
00:16:19.115        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:19.115        "is_configured": false,
00:16:19.115        "data_offset": 0,
00:16:19.115        "data_size": 0
00:16:19.115      }
00:16:19.115    ]
00:16:19.115  }'
00:16:19.115   23:49:49	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:19.115   23:49:49	-- common/autotest_common.sh@10 -- # set +x
00:16:19.682   23:49:50	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:19.941  [2024-12-13 23:49:50.441261] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:19.941  [2024-12-13 23:49:50.441297] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:16:19.941   23:49:50	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:16:20.200  [2024-12-13 23:49:50.697307] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:20.200  [2024-12-13 23:49:50.697590] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:20.200  [2024-12-13 23:49:50.697609] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:20.200  [2024-12-13 23:49:50.697737] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:20.200  [2024-12-13 23:49:50.697753] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:20.200  [2024-12-13 23:49:50.697863] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:20.200   23:49:50	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:16:20.200  [2024-12-13 23:49:50.918797] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:20.200  BaseBdev1
00:16:20.459   23:49:50	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:16:20.459   23:49:50	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:16:20.459   23:49:50	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:20.459   23:49:50	-- common/autotest_common.sh@899 -- # local i
00:16:20.459   23:49:50	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:20.459   23:49:50	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:20.459   23:49:50	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:20.459   23:49:51	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:20.717  [
00:16:20.717    {
00:16:20.717      "name": "BaseBdev1",
00:16:20.717      "aliases": [
00:16:20.717        "7f852cac-fa3f-4dbf-bcc6-35d517ab3c2d"
00:16:20.717      ],
00:16:20.717      "product_name": "Malloc disk",
00:16:20.717      "block_size": 512,
00:16:20.717      "num_blocks": 65536,
00:16:20.717      "uuid": "7f852cac-fa3f-4dbf-bcc6-35d517ab3c2d",
00:16:20.717      "assigned_rate_limits": {
00:16:20.717        "rw_ios_per_sec": 0,
00:16:20.717        "rw_mbytes_per_sec": 0,
00:16:20.717        "r_mbytes_per_sec": 0,
00:16:20.717        "w_mbytes_per_sec": 0
00:16:20.717      },
00:16:20.717      "claimed": true,
00:16:20.717      "claim_type": "exclusive_write",
00:16:20.718      "zoned": false,
00:16:20.718      "supported_io_types": {
00:16:20.718        "read": true,
00:16:20.718        "write": true,
00:16:20.718        "unmap": true,
00:16:20.718        "write_zeroes": true,
00:16:20.718        "flush": true,
00:16:20.718        "reset": true,
00:16:20.718        "compare": false,
00:16:20.718        "compare_and_write": false,
00:16:20.718        "abort": true,
00:16:20.718        "nvme_admin": false,
00:16:20.718        "nvme_io": false
00:16:20.718      },
00:16:20.718      "memory_domains": [
00:16:20.718        {
00:16:20.718          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:20.718          "dma_device_type": 2
00:16:20.718        }
00:16:20.718      ],
00:16:20.718      "driver_specific": {}
00:16:20.718    }
00:16:20.718  ]
00:16:20.718   23:49:51	-- common/autotest_common.sh@905 -- # return 0
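waitforbdev (autotest_common.sh@897-905) defaults its timeout to 2000 ms when the caller passes none, lets examine finish, then queries the bdev with that timeout so the RPC itself does the waiting. A sketch matching the xtrace (rpc_py again stands for the rpc.py call on the raid socket):

waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=$2
    local i
    [[ -z $bdev_timeout ]] && bdev_timeout=2000            # @900 above
    rpc_py bdev_wait_for_examine                           # @902
    rpc_py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"   # @904: -t makes the RPC block
}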
00:16:20.718   23:49:51	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:16:20.718   23:49:51	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:20.718   23:49:51	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:20.718   23:49:51	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:20.718   23:49:51	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:20.718   23:49:51	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:20.718   23:49:51	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:20.718   23:49:51	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:20.718   23:49:51	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:20.718   23:49:51	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:20.718    23:49:51	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:20.718    23:49:51	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:20.977   23:49:51	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:20.977    "name": "Existed_Raid",
00:16:20.977    "uuid": "00000000-0000-0000-0000-000000000000",
00:16:20.977    "strip_size_kb": 0,
00:16:20.977    "state": "configuring",
00:16:20.977    "raid_level": "raid1",
00:16:20.977    "superblock": false,
00:16:20.977    "num_base_bdevs": 3,
00:16:20.977    "num_base_bdevs_discovered": 1,
00:16:20.977    "num_base_bdevs_operational": 3,
00:16:20.977    "base_bdevs_list": [
00:16:20.977      {
00:16:20.977        "name": "BaseBdev1",
00:16:20.977        "uuid": "7f852cac-fa3f-4dbf-bcc6-35d517ab3c2d",
00:16:20.977        "is_configured": true,
00:16:20.977        "data_offset": 0,
00:16:20.977        "data_size": 65536
00:16:20.977      },
00:16:20.977      {
00:16:20.977        "name": "BaseBdev2",
00:16:20.977        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:20.977        "is_configured": false,
00:16:20.977        "data_offset": 0,
00:16:20.977        "data_size": 0
00:16:20.977      },
00:16:20.977      {
00:16:20.977        "name": "BaseBdev3",
00:16:20.977        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:20.977        "is_configured": false,
00:16:20.977        "data_offset": 0,
00:16:20.977        "data_size": 0
00:16:20.977      }
00:16:20.977    ]
00:16:20.977  }'
00:16:20.977   23:49:51	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:20.977   23:49:51	-- common/autotest_common.sh@10 -- # set +x
00:16:21.545   23:49:52	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:21.803  [2024-12-13 23:49:52.415070] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:21.803  [2024-12-13 23:49:52.415241] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:16:21.803   23:49:52	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:16:21.803   23:49:52	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:16:22.062  [2024-12-13 23:49:52.591140] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:22.062  [2024-12-13 23:49:52.592962] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:22.062  [2024-12-13 23:49:52.593532] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:22.062  [2024-12-13 23:49:52.593752] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:22.062  [2024-12-13 23:49:52.593921] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:22.062   23:49:52	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:16:22.062   23:49:52	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:22.062   23:49:52	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:16:22.062   23:49:52	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:22.062   23:49:52	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:22.062   23:49:52	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:22.062   23:49:52	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:22.062   23:49:52	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:22.062   23:49:52	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:22.062   23:49:52	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:22.062   23:49:52	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:22.062   23:49:52	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:22.062    23:49:52	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:22.062    23:49:52	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:22.321   23:49:52	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:22.321    "name": "Existed_Raid",
00:16:22.321    "uuid": "00000000-0000-0000-0000-000000000000",
00:16:22.321    "strip_size_kb": 0,
00:16:22.321    "state": "configuring",
00:16:22.321    "raid_level": "raid1",
00:16:22.321    "superblock": false,
00:16:22.321    "num_base_bdevs": 3,
00:16:22.321    "num_base_bdevs_discovered": 1,
00:16:22.321    "num_base_bdevs_operational": 3,
00:16:22.321    "base_bdevs_list": [
00:16:22.321      {
00:16:22.321        "name": "BaseBdev1",
00:16:22.321        "uuid": "7f852cac-fa3f-4dbf-bcc6-35d517ab3c2d",
00:16:22.321        "is_configured": true,
00:16:22.321        "data_offset": 0,
00:16:22.321        "data_size": 65536
00:16:22.321      },
00:16:22.321      {
00:16:22.321        "name": "BaseBdev2",
00:16:22.321        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:22.321        "is_configured": false,
00:16:22.321        "data_offset": 0,
00:16:22.321        "data_size": 0
00:16:22.321      },
00:16:22.321      {
00:16:22.321        "name": "BaseBdev3",
00:16:22.321        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:22.321        "is_configured": false,
00:16:22.321        "data_offset": 0,
00:16:22.321        "data_size": 0
00:16:22.321      }
00:16:22.321    ]
00:16:22.321  }'
00:16:22.321   23:49:52	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:22.321   23:49:52	-- common/autotest_common.sh@10 -- # set +x
00:16:22.888   23:49:53	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:16:23.147  [2024-12-13 23:49:53.706918] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:23.147  BaseBdev2
00:16:23.147   23:49:53	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:16:23.147   23:49:53	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:16:23.147   23:49:53	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:23.147   23:49:53	-- common/autotest_common.sh@899 -- # local i
00:16:23.147   23:49:53	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:23.147   23:49:53	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:23.147   23:49:53	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:23.406   23:49:53	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:23.406  [
00:16:23.406    {
00:16:23.406      "name": "BaseBdev2",
00:16:23.406      "aliases": [
00:16:23.406        "d63fd232-ca8a-4648-8d00-9dda98abc617"
00:16:23.406      ],
00:16:23.406      "product_name": "Malloc disk",
00:16:23.406      "block_size": 512,
00:16:23.406      "num_blocks": 65536,
00:16:23.406      "uuid": "d63fd232-ca8a-4648-8d00-9dda98abc617",
00:16:23.406      "assigned_rate_limits": {
00:16:23.406        "rw_ios_per_sec": 0,
00:16:23.406        "rw_mbytes_per_sec": 0,
00:16:23.406        "r_mbytes_per_sec": 0,
00:16:23.406        "w_mbytes_per_sec": 0
00:16:23.406      },
00:16:23.406      "claimed": true,
00:16:23.406      "claim_type": "exclusive_write",
00:16:23.406      "zoned": false,
00:16:23.406      "supported_io_types": {
00:16:23.406        "read": true,
00:16:23.406        "write": true,
00:16:23.406        "unmap": true,
00:16:23.406        "write_zeroes": true,
00:16:23.406        "flush": true,
00:16:23.406        "reset": true,
00:16:23.406        "compare": false,
00:16:23.406        "compare_and_write": false,
00:16:23.406        "abort": true,
00:16:23.406        "nvme_admin": false,
00:16:23.406        "nvme_io": false
00:16:23.406      },
00:16:23.406      "memory_domains": [
00:16:23.406        {
00:16:23.406          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:23.406          "dma_device_type": 2
00:16:23.406        }
00:16:23.406      ],
00:16:23.406      "driver_specific": {}
00:16:23.406    }
00:16:23.406  ]
00:16:23.406   23:49:54	-- common/autotest_common.sh@905 -- # return 0
00:16:23.406   23:49:54	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:23.406   23:49:54	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:23.406   23:49:54	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:16:23.406   23:49:54	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:23.406   23:49:54	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:23.406   23:49:54	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:23.406   23:49:54	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:23.406   23:49:54	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:23.406   23:49:54	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:23.406   23:49:54	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:23.406   23:49:54	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:23.406   23:49:54	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:23.406    23:49:54	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:23.406    23:49:54	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:23.665   23:49:54	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:23.665    "name": "Existed_Raid",
00:16:23.665    "uuid": "00000000-0000-0000-0000-000000000000",
00:16:23.665    "strip_size_kb": 0,
00:16:23.665    "state": "configuring",
00:16:23.665    "raid_level": "raid1",
00:16:23.665    "superblock": false,
00:16:23.665    "num_base_bdevs": 3,
00:16:23.665    "num_base_bdevs_discovered": 2,
00:16:23.665    "num_base_bdevs_operational": 3,
00:16:23.665    "base_bdevs_list": [
00:16:23.665      {
00:16:23.665        "name": "BaseBdev1",
00:16:23.665        "uuid": "7f852cac-fa3f-4dbf-bcc6-35d517ab3c2d",
00:16:23.665        "is_configured": true,
00:16:23.665        "data_offset": 0,
00:16:23.665        "data_size": 65536
00:16:23.665      },
00:16:23.665      {
00:16:23.665        "name": "BaseBdev2",
00:16:23.665        "uuid": "d63fd232-ca8a-4648-8d00-9dda98abc617",
00:16:23.665        "is_configured": true,
00:16:23.665        "data_offset": 0,
00:16:23.665        "data_size": 65536
00:16:23.665      },
00:16:23.665      {
00:16:23.665        "name": "BaseBdev3",
00:16:23.665        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:23.665        "is_configured": false,
00:16:23.665        "data_offset": 0,
00:16:23.665        "data_size": 0
00:16:23.665      }
00:16:23.665    ]
00:16:23.665  }'
00:16:23.665   23:49:54	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:23.665   23:49:54	-- common/autotest_common.sh@10 -- # set +x
00:16:24.232   23:49:54	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:16:24.495  [2024-12-13 23:49:55.154567] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:24.495  [2024-12-13 23:49:55.154801] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:16:24.495  [2024-12-13 23:49:55.154845] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:16:24.495  [2024-12-13 23:49:55.155044] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0
00:16:24.495  [2024-12-13 23:49:55.155558] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:16:24.495  [2024-12-13 23:49:55.155695] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80
00:16:24.495  [2024-12-13 23:49:55.156038] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:24.495  BaseBdev3
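In contrast to the concat case earlier (190464 blocks for three members), the raid1 array reports blockcnt 65536 at bdev_raid.c:1585: mirroring three 65536-block malloc bdevs yields the capacity of a single member.

echo $((65536 * 512 / 1024 / 1024))   # 32 (MiB): usable raid1 size equals one base bdev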
00:16:24.495   23:49:55	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:16:24.495   23:49:55	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:16:24.495   23:49:55	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:24.495   23:49:55	-- common/autotest_common.sh@899 -- # local i
00:16:24.495   23:49:55	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:24.495   23:49:55	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:24.495   23:49:55	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:24.754   23:49:55	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:25.013  [
00:16:25.013    {
00:16:25.013      "name": "BaseBdev3",
00:16:25.013      "aliases": [
00:16:25.013        "f8063a29-47d2-43c6-a547-fef1a6eb8195"
00:16:25.013      ],
00:16:25.013      "product_name": "Malloc disk",
00:16:25.013      "block_size": 512,
00:16:25.013      "num_blocks": 65536,
00:16:25.013      "uuid": "f8063a29-47d2-43c6-a547-fef1a6eb8195",
00:16:25.013      "assigned_rate_limits": {
00:16:25.013        "rw_ios_per_sec": 0,
00:16:25.013        "rw_mbytes_per_sec": 0,
00:16:25.013        "r_mbytes_per_sec": 0,
00:16:25.013        "w_mbytes_per_sec": 0
00:16:25.013      },
00:16:25.013      "claimed": true,
00:16:25.013      "claim_type": "exclusive_write",
00:16:25.013      "zoned": false,
00:16:25.013      "supported_io_types": {
00:16:25.013        "read": true,
00:16:25.013        "write": true,
00:16:25.013        "unmap": true,
00:16:25.013        "write_zeroes": true,
00:16:25.013        "flush": true,
00:16:25.013        "reset": true,
00:16:25.013        "compare": false,
00:16:25.013        "compare_and_write": false,
00:16:25.013        "abort": true,
00:16:25.013        "nvme_admin": false,
00:16:25.013        "nvme_io": false
00:16:25.013      },
00:16:25.013      "memory_domains": [
00:16:25.013        {
00:16:25.013          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:25.013          "dma_device_type": 2
00:16:25.013        }
00:16:25.013      ],
00:16:25.013      "driver_specific": {}
00:16:25.013    }
00:16:25.013  ]
00:16:25.013   23:49:55	-- common/autotest_common.sh@905 -- # return 0
00:16:25.013   23:49:55	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:25.013   23:49:55	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:25.013   23:49:55	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:16:25.013   23:49:55	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:25.013   23:49:55	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:25.013   23:49:55	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:25.013   23:49:55	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:25.013   23:49:55	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:25.013   23:49:55	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:25.013   23:49:55	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:25.013   23:49:55	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:25.013   23:49:55	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:25.013    23:49:55	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:25.013    23:49:55	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:25.271   23:49:55	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:25.271    "name": "Existed_Raid",
00:16:25.271    "uuid": "4cf161d3-fc0c-4de8-a245-e6b4d3d7ef36",
00:16:25.271    "strip_size_kb": 0,
00:16:25.271    "state": "online",
00:16:25.271    "raid_level": "raid1",
00:16:25.271    "superblock": false,
00:16:25.271    "num_base_bdevs": 3,
00:16:25.271    "num_base_bdevs_discovered": 3,
00:16:25.271    "num_base_bdevs_operational": 3,
00:16:25.271    "base_bdevs_list": [
00:16:25.271      {
00:16:25.271        "name": "BaseBdev1",
00:16:25.271        "uuid": "7f852cac-fa3f-4dbf-bcc6-35d517ab3c2d",
00:16:25.271        "is_configured": true,
00:16:25.272        "data_offset": 0,
00:16:25.272        "data_size": 65536
00:16:25.272      },
00:16:25.272      {
00:16:25.272        "name": "BaseBdev2",
00:16:25.272        "uuid": "d63fd232-ca8a-4648-8d00-9dda98abc617",
00:16:25.272        "is_configured": true,
00:16:25.272        "data_offset": 0,
00:16:25.272        "data_size": 65536
00:16:25.272      },
00:16:25.272      {
00:16:25.272        "name": "BaseBdev3",
00:16:25.272        "uuid": "f8063a29-47d2-43c6-a547-fef1a6eb8195",
00:16:25.272        "is_configured": true,
00:16:25.272        "data_offset": 0,
00:16:25.272        "data_size": 65536
00:16:25.272      }
00:16:25.272    ]
00:16:25.272  }'
00:16:25.272   23:49:55	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:25.272   23:49:55	-- common/autotest_common.sh@10 -- # set +x
00:16:25.839   23:49:56	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:16:26.097  [2024-12-13 23:49:56.755081] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:26.356   23:49:56	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:16:26.356   23:49:56	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid1
00:16:26.356   23:49:56	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:16:26.356   23:49:56	-- bdev/bdev_raid.sh@196 -- # return 0
00:16:26.356   23:49:56	-- bdev/bdev_raid.sh@267 -- # expected_state=online
00:16:26.356   23:49:56	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:16:26.356   23:49:56	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:26.356   23:49:56	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:26.356   23:49:56	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:26.356   23:49:56	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:26.356   23:49:56	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:16:26.356   23:49:56	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:26.356   23:49:56	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:26.356   23:49:56	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:26.356   23:49:56	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:26.356    23:49:56	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:26.356    23:49:56	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:26.356   23:49:57	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:26.356    "name": "Existed_Raid",
00:16:26.356    "uuid": "4cf161d3-fc0c-4de8-a245-e6b4d3d7ef36",
00:16:26.356    "strip_size_kb": 0,
00:16:26.356    "state": "online",
00:16:26.356    "raid_level": "raid1",
00:16:26.356    "superblock": false,
00:16:26.356    "num_base_bdevs": 3,
00:16:26.356    "num_base_bdevs_discovered": 2,
00:16:26.356    "num_base_bdevs_operational": 2,
00:16:26.356    "base_bdevs_list": [
00:16:26.356      {
00:16:26.356        "name": null,
00:16:26.356        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:26.356        "is_configured": false,
00:16:26.356        "data_offset": 0,
00:16:26.356        "data_size": 65536
00:16:26.356      },
00:16:26.356      {
00:16:26.356        "name": "BaseBdev2",
00:16:26.356        "uuid": "d63fd232-ca8a-4648-8d00-9dda98abc617",
00:16:26.356        "is_configured": true,
00:16:26.356        "data_offset": 0,
00:16:26.356        "data_size": 65536
00:16:26.356      },
00:16:26.356      {
00:16:26.356        "name": "BaseBdev3",
00:16:26.356        "uuid": "f8063a29-47d2-43c6-a547-fef1a6eb8195",
00:16:26.356        "is_configured": true,
00:16:26.356        "data_offset": 0,
00:16:26.356        "data_size": 65536
00:16:26.356      }
00:16:26.356    ]
00:16:26.356  }'
00:16:26.356   23:49:57	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:26.356   23:49:57	-- common/autotest_common.sh@10 -- # set +x
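This is the redundancy payoff: BaseBdev1 was deleted out of an online raid1, and instead of deconfiguring, the array stays online with the surviving pair (discovered and operational both drop to 2; the removed slot keeps a null name). The same check by hand, reusing the rpc/sock variables from the earlier sketch:

"$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
# expect: online 2/2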
00:16:27.291   23:49:57	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:16:27.291   23:49:57	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:27.291    23:49:57	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:27.291    23:49:57	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:16:27.291   23:49:57	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:16:27.291   23:49:57	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:27.291   23:49:57	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:16:27.550  [2024-12-13 23:49:58.065723] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:27.550   23:49:58	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:16:27.550   23:49:58	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:27.550    23:49:58	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:27.550    23:49:58	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:16:27.808   23:49:58	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:16:27.808   23:49:58	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:27.808   23:49:58	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:16:28.067  [2024-12-13 23:49:58.548664] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:28.067  [2024-12-13 23:49:58.548827] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:28.067  [2024-12-13 23:49:58.548995] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:28.067  [2024-12-13 23:49:58.618811] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:28.067  [2024-12-13 23:49:58.618963] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline
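Deleting BaseBdev3 removes the last remaining member (BaseBdev1 and BaseBdev2 are already gone), so the raid deconfigures (bdev_raid.c:1734, online to offline), destructs, and is cleaned up in state offline. At that point the name query used by the test comes back empty, which is what terminates the loop below:

"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'
# empty output -> raid_bdev= in the xtrace that follows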
00:16:28.067   23:49:58	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:16:28.067   23:49:58	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:28.067    23:49:58	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:28.067    23:49:58	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:16:28.326   23:49:58	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:16:28.326   23:49:58	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:16:28.326   23:49:58	-- bdev/bdev_raid.sh@287 -- # killprocess 116975
00:16:28.326   23:49:58	-- common/autotest_common.sh@936 -- # '[' -z 116975 ']'
00:16:28.326   23:49:58	-- common/autotest_common.sh@940 -- # kill -0 116975
00:16:28.326    23:49:58	-- common/autotest_common.sh@941 -- # uname
00:16:28.326   23:49:58	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:28.326    23:49:58	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116975
00:16:28.326  killing process with pid 116975
00:16:28.326   23:49:58	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:28.326   23:49:58	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:28.326   23:49:58	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 116975'
00:16:28.326   23:49:58	-- common/autotest_common.sh@955 -- # kill 116975
00:16:28.326  [2024-12-13 23:49:58.901575] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:28.326  [2024-12-13 23:49:58.901734] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:28.326   23:49:58	-- common/autotest_common.sh@960 -- # wait 116975
00:16:29.263  ************************************
00:16:29.263  END TEST raid_state_function_test
00:16:29.263  ************************************
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@289 -- # return 0
00:16:29.264  
00:16:29.264  real	0m11.620s
00:16:29.264  user	0m20.427s
00:16:29.264  sys	0m1.390s
00:16:29.264   23:49:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:29.264   23:49:59	-- common/autotest_common.sh@10 -- # set +x
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true
00:16:29.264   23:49:59	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:16:29.264   23:49:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:29.264   23:49:59	-- common/autotest_common.sh@10 -- # set +x
00:16:29.264  ************************************
00:16:29.264  START TEST raid_state_function_test_sb
00:16:29.264  ************************************
00:16:29.264   23:49:59	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 true
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid1
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:16:29.264    23:49:59	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:16:29.264    23:49:59	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:29.264    23:49:59	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:16:29.264    23:49:59	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:29.264    23:49:59	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:29.264    23:49:59	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:16:29.264    23:49:59	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:29.264    23:49:59	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:29.264    23:49:59	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:16:29.264    23:49:59	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:29.264    23:49:59	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']'
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@216 -- # strip_size=0
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@226 -- # raid_pid=117352
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 117352'
00:16:29.264  Process raid pid: 117352
00:16:29.264   23:49:59	-- bdev/bdev_raid.sh@228 -- # waitforlisten 117352 /var/tmp/spdk-raid.sock
00:16:29.264   23:49:59	-- common/autotest_common.sh@829 -- # '[' -z 117352 ']'
00:16:29.264   23:49:59	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:16:29.264   23:49:59	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:29.264   23:49:59	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:16:29.264  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:16:29.264   23:49:59	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:29.264   23:49:59	-- common/autotest_common.sh@10 -- # set +x
00:16:29.264  [2024-12-13 23:49:59.973673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:29.264  [2024-12-13 23:49:59.974083] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:29.523  [2024-12-13 23:50:00.142334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:29.784  [2024-12-13 23:50:00.322316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:29.784  [2024-12-13 23:50:00.511062] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
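The block above is the standard fixture for these raid tests: a bare bdev_svc app is launched with its JSON-RPC server on /var/tmp/spdk-raid.sock, and waitforlisten blocks until that socket answers before any bdev RPCs are issued. A minimal sketch of the same setup (the polling RPC is an assumption; the real helper simply retries until the socket responds):

    # start the app with a dedicated RPC socket and raid debug logging
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # poll until the UNIX socket accepts RPCs (rpc_get_methods is one cheap probe)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done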
00:16:30.380   23:50:00	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:30.380   23:50:00	-- common/autotest_common.sh@862 -- # return 0
00:16:30.380   23:50:00	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:16:30.639  [2024-12-13 23:50:01.142004] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:30.639  [2024-12-13 23:50:01.142466] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:30.639  [2024-12-13 23:50:01.142619] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:30.639  [2024-12-13 23:50:01.142779] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:30.639  [2024-12-13 23:50:01.142922] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:30.639  [2024-12-13 23:50:01.143098] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
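Note that bdev_raid_create succeeds here even though none of the three base bdevs exist yet: the raid bdev is registered in the "configuring" state and simply waits for its members. The command plus a quick state check, condensed from the xtrace above (rpc.py abbreviates the full scripts/rpc.py path used in the log):

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid   # -s = write superblock
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid") | .state'   # -> configuring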
00:16:30.639   23:50:01	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:16:30.639   23:50:01	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:30.639   23:50:01	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:30.639   23:50:01	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:30.639   23:50:01	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:30.639   23:50:01	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:30.639   23:50:01	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:30.639   23:50:01	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:30.639   23:50:01	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:30.639   23:50:01	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:30.639    23:50:01	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:30.639    23:50:01	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:30.898   23:50:01	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:30.898    "name": "Existed_Raid",
00:16:30.898    "uuid": "f0c4bff9-b6a7-44e0-9e19-e19eee2e611d",
00:16:30.898    "strip_size_kb": 0,
00:16:30.898    "state": "configuring",
00:16:30.898    "raid_level": "raid1",
00:16:30.898    "superblock": true,
00:16:30.898    "num_base_bdevs": 3,
00:16:30.898    "num_base_bdevs_discovered": 0,
00:16:30.898    "num_base_bdevs_operational": 3,
00:16:30.898    "base_bdevs_list": [
00:16:30.898      {
00:16:30.898        "name": "BaseBdev1",
00:16:30.898        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:30.898        "is_configured": false,
00:16:30.898        "data_offset": 0,
00:16:30.898        "data_size": 0
00:16:30.898      },
00:16:30.898      {
00:16:30.898        "name": "BaseBdev2",
00:16:30.898        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:30.898        "is_configured": false,
00:16:30.898        "data_offset": 0,
00:16:30.898        "data_size": 0
00:16:30.898      },
00:16:30.898      {
00:16:30.898        "name": "BaseBdev3",
00:16:30.898        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:30.898        "is_configured": false,
00:16:30.898        "data_offset": 0,
00:16:30.898        "data_size": 0
00:16:30.898      }
00:16:30.898    ]
00:16:30.898  }'
00:16:30.898   23:50:01	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:30.898   23:50:01	-- common/autotest_common.sh@10 -- # set +x
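verify_raid_bdev_state, whose xtrace repeats throughout this log, fetches the JSON above and asserts the interesting fields. A simplified sketch of the check (assumption: the real helper also compares the discovered/operational counts against its arguments; only a few fields are shown):

    tmp=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
          jq -r '.[] | select(.name == "Existed_Raid")')
    [ "$(jq -r .state      <<<"$tmp")" = configuring ]
    [ "$(jq -r .raid_level <<<"$tmp")" = raid1 ]
    [ "$(jq -r .num_base_bdevs_operational <<<"$tmp")" -eq 3 ]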
00:16:31.464   23:50:01	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:31.723  [2024-12-13 23:50:02.230059] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:31.723  [2024-12-13 23:50:02.230215] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:16:31.723   23:50:02	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:16:31.982  [2024-12-13 23:50:02.458128] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:31.982  [2024-12-13 23:50:02.458525] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:31.982  [2024-12-13 23:50:02.458650] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:31.982  [2024-12-13 23:50:02.458807] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:31.982  [2024-12-13 23:50:02.459004] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:31.982  [2024-12-13 23:50:02.459153] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:31.982   23:50:02	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:16:31.982  [2024-12-13 23:50:02.680522] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:31.982  BaseBdev1
00:16:31.982   23:50:02	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:16:31.982   23:50:02	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:16:31.982   23:50:02	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:31.982   23:50:02	-- common/autotest_common.sh@899 -- # local i
00:16:31.982   23:50:02	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:31.982   23:50:02	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:31.982   23:50:02	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:32.240   23:50:02	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:32.499  [
00:16:32.499    {
00:16:32.499      "name": "BaseBdev1",
00:16:32.499      "aliases": [
00:16:32.499        "2d159f4b-6854-4595-b354-9b90ca2af00c"
00:16:32.499      ],
00:16:32.499      "product_name": "Malloc disk",
00:16:32.499      "block_size": 512,
00:16:32.499      "num_blocks": 65536,
00:16:32.499      "uuid": "2d159f4b-6854-4595-b354-9b90ca2af00c",
00:16:32.499      "assigned_rate_limits": {
00:16:32.499        "rw_ios_per_sec": 0,
00:16:32.499        "rw_mbytes_per_sec": 0,
00:16:32.499        "r_mbytes_per_sec": 0,
00:16:32.499        "w_mbytes_per_sec": 0
00:16:32.499      },
00:16:32.499      "claimed": true,
00:16:32.499      "claim_type": "exclusive_write",
00:16:32.499      "zoned": false,
00:16:32.499      "supported_io_types": {
00:16:32.499        "read": true,
00:16:32.499        "write": true,
00:16:32.499        "unmap": true,
00:16:32.499        "write_zeroes": true,
00:16:32.499        "flush": true,
00:16:32.499        "reset": true,
00:16:32.499        "compare": false,
00:16:32.499        "compare_and_write": false,
00:16:32.499        "abort": true,
00:16:32.499        "nvme_admin": false,
00:16:32.499        "nvme_io": false
00:16:32.499      },
00:16:32.499      "memory_domains": [
00:16:32.499        {
00:16:32.499          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:32.499          "dma_device_type": 2
00:16:32.499        }
00:16:32.499      ],
00:16:32.499      "driver_specific": {}
00:16:32.499    }
00:16:32.499  ]
00:16:32.499   23:50:03	-- common/autotest_common.sh@905 -- # return 0
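waitforbdev, traced right above, boils down to two RPCs: let all examine-on-registration callbacks finish, then ask for the named bdev with a 2000 ms timeout; the JSON dump above is the successful reply.

    rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
    rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000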
00:16:32.499   23:50:03	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:16:32.499   23:50:03	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:32.499   23:50:03	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:32.499   23:50:03	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:32.499   23:50:03	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:32.499   23:50:03	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:32.499   23:50:03	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:32.499   23:50:03	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:32.499   23:50:03	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:32.499   23:50:03	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:32.499    23:50:03	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:32.499    23:50:03	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:32.758   23:50:03	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:32.758    "name": "Existed_Raid",
00:16:32.758    "uuid": "c567b31a-7be5-4841-8507-b13a7a864ed4",
00:16:32.758    "strip_size_kb": 0,
00:16:32.758    "state": "configuring",
00:16:32.758    "raid_level": "raid1",
00:16:32.758    "superblock": true,
00:16:32.758    "num_base_bdevs": 3,
00:16:32.758    "num_base_bdevs_discovered": 1,
00:16:32.758    "num_base_bdevs_operational": 3,
00:16:32.758    "base_bdevs_list": [
00:16:32.758      {
00:16:32.758        "name": "BaseBdev1",
00:16:32.758        "uuid": "2d159f4b-6854-4595-b354-9b90ca2af00c",
00:16:32.758        "is_configured": true,
00:16:32.758        "data_offset": 2048,
00:16:32.758        "data_size": 63488
00:16:32.758      },
00:16:32.758      {
00:16:32.758        "name": "BaseBdev2",
00:16:32.758        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:32.758        "is_configured": false,
00:16:32.758        "data_offset": 0,
00:16:32.758        "data_size": 0
00:16:32.758      },
00:16:32.758      {
00:16:32.758        "name": "BaseBdev3",
00:16:32.758        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:32.758        "is_configured": false,
00:16:32.758        "data_offset": 0,
00:16:32.758        "data_size": 0
00:16:32.758      }
00:16:32.758    ]
00:16:32.758  }'
00:16:32.758   23:50:03	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:32.758   23:50:03	-- common/autotest_common.sh@10 -- # set +x
00:16:33.325   23:50:03	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:33.584  [2024-12-13 23:50:04.204784] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:33.584  [2024-12-13 23:50:04.204953] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
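Because this is the superblock variant ('[' true = true ']' below), the test recycles BaseBdev1 before re-creating the raid, as the next RPCs show. The presumed intent (the script's own comments are not visible in the log) is that a freshly created malloc carries no stale raid superblock and can be claimed cleanly:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1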
00:16:33.584   23:50:04	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:16:33.584   23:50:04	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:16:33.842   23:50:04	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:16:34.100  BaseBdev1
00:16:34.100   23:50:04	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:16:34.100   23:50:04	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:16:34.100   23:50:04	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:34.100   23:50:04	-- common/autotest_common.sh@899 -- # local i
00:16:34.100   23:50:04	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:34.100   23:50:04	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:34.100   23:50:04	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:34.358   23:50:04	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:34.619  [
00:16:34.619    {
00:16:34.619      "name": "BaseBdev1",
00:16:34.619      "aliases": [
00:16:34.619        "701217c2-1eac-42b2-8939-b7382e90e0d4"
00:16:34.619      ],
00:16:34.619      "product_name": "Malloc disk",
00:16:34.619      "block_size": 512,
00:16:34.619      "num_blocks": 65536,
00:16:34.619      "uuid": "701217c2-1eac-42b2-8939-b7382e90e0d4",
00:16:34.619      "assigned_rate_limits": {
00:16:34.619        "rw_ios_per_sec": 0,
00:16:34.619        "rw_mbytes_per_sec": 0,
00:16:34.619        "r_mbytes_per_sec": 0,
00:16:34.619        "w_mbytes_per_sec": 0
00:16:34.619      },
00:16:34.619      "claimed": false,
00:16:34.619      "zoned": false,
00:16:34.619      "supported_io_types": {
00:16:34.619        "read": true,
00:16:34.619        "write": true,
00:16:34.619        "unmap": true,
00:16:34.619        "write_zeroes": true,
00:16:34.619        "flush": true,
00:16:34.619        "reset": true,
00:16:34.619        "compare": false,
00:16:34.619        "compare_and_write": false,
00:16:34.619        "abort": true,
00:16:34.619        "nvme_admin": false,
00:16:34.619        "nvme_io": false
00:16:34.619      },
00:16:34.619      "memory_domains": [
00:16:34.619        {
00:16:34.619          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:34.619          "dma_device_type": 2
00:16:34.619        }
00:16:34.619      ],
00:16:34.619      "driver_specific": {}
00:16:34.619    }
00:16:34.619  ]
00:16:34.619   23:50:05	-- common/autotest_common.sh@905 -- # return 0
00:16:34.619   23:50:05	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:16:34.619  [2024-12-13 23:50:05.300494] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:34.619  [2024-12-13 23:50:05.302633] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:34.619  [2024-12-13 23:50:05.303222] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:34.619  [2024-12-13 23:50:05.303356] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:34.619  [2024-12-13 23:50:05.303559] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:34.619   23:50:05	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:16:34.619   23:50:05	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:34.619   23:50:05	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:16:34.619   23:50:05	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:34.619   23:50:05	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:34.619   23:50:05	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:34.619   23:50:05	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:34.619   23:50:05	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:34.620   23:50:05	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:34.620   23:50:05	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:34.620   23:50:05	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:34.620   23:50:05	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:34.620    23:50:05	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:34.620    23:50:05	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:34.879   23:50:05	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:34.879    "name": "Existed_Raid",
00:16:34.879    "uuid": "2a2f8821-ce7d-4620-92c4-eabfb8480961",
00:16:34.879    "strip_size_kb": 0,
00:16:34.879    "state": "configuring",
00:16:34.879    "raid_level": "raid1",
00:16:34.879    "superblock": true,
00:16:34.879    "num_base_bdevs": 3,
00:16:34.879    "num_base_bdevs_discovered": 1,
00:16:34.879    "num_base_bdevs_operational": 3,
00:16:34.879    "base_bdevs_list": [
00:16:34.879      {
00:16:34.879        "name": "BaseBdev1",
00:16:34.879        "uuid": "701217c2-1eac-42b2-8939-b7382e90e0d4",
00:16:34.879        "is_configured": true,
00:16:34.879        "data_offset": 2048,
00:16:34.879        "data_size": 63488
00:16:34.879      },
00:16:34.879      {
00:16:34.879        "name": "BaseBdev2",
00:16:34.879        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:34.879        "is_configured": false,
00:16:34.879        "data_offset": 0,
00:16:34.879        "data_size": 0
00:16:34.879      },
00:16:34.879      {
00:16:34.879        "name": "BaseBdev3",
00:16:34.879        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:34.879        "is_configured": false,
00:16:34.879        "data_offset": 0,
00:16:34.879        "data_size": 0
00:16:34.879      }
00:16:34.879    ]
00:16:34.879  }'
00:16:34.879   23:50:05	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:34.879   23:50:05	-- common/autotest_common.sh@10 -- # set +x
00:16:35.446   23:50:06	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:16:35.705  [2024-12-13 23:50:06.327061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:35.705  BaseBdev2
00:16:35.705   23:50:06	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:16:35.705   23:50:06	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:16:35.705   23:50:06	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:35.705   23:50:06	-- common/autotest_common.sh@899 -- # local i
00:16:35.705   23:50:06	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:35.705   23:50:06	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:35.705   23:50:06	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:35.963   23:50:06	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:36.222  [
00:16:36.222    {
00:16:36.222      "name": "BaseBdev2",
00:16:36.222      "aliases": [
00:16:36.222        "51f1c2c1-8743-4d2b-b2b3-5efea8f26372"
00:16:36.222      ],
00:16:36.222      "product_name": "Malloc disk",
00:16:36.222      "block_size": 512,
00:16:36.222      "num_blocks": 65536,
00:16:36.222      "uuid": "51f1c2c1-8743-4d2b-b2b3-5efea8f26372",
00:16:36.222      "assigned_rate_limits": {
00:16:36.222        "rw_ios_per_sec": 0,
00:16:36.222        "rw_mbytes_per_sec": 0,
00:16:36.222        "r_mbytes_per_sec": 0,
00:16:36.222        "w_mbytes_per_sec": 0
00:16:36.222      },
00:16:36.222      "claimed": true,
00:16:36.222      "claim_type": "exclusive_write",
00:16:36.222      "zoned": false,
00:16:36.222      "supported_io_types": {
00:16:36.222        "read": true,
00:16:36.222        "write": true,
00:16:36.222        "unmap": true,
00:16:36.222        "write_zeroes": true,
00:16:36.222        "flush": true,
00:16:36.222        "reset": true,
00:16:36.222        "compare": false,
00:16:36.222        "compare_and_write": false,
00:16:36.222        "abort": true,
00:16:36.222        "nvme_admin": false,
00:16:36.222        "nvme_io": false
00:16:36.222      },
00:16:36.222      "memory_domains": [
00:16:36.222        {
00:16:36.222          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:36.222          "dma_device_type": 2
00:16:36.222        }
00:16:36.222      ],
00:16:36.222      "driver_specific": {}
00:16:36.222    }
00:16:36.222  ]
00:16:36.222   23:50:06	-- common/autotest_common.sh@905 -- # return 0
00:16:36.222   23:50:06	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:36.222   23:50:06	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:36.222   23:50:06	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:16:36.222   23:50:06	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:36.222   23:50:06	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:36.222   23:50:06	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:36.222   23:50:06	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:36.222   23:50:06	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:36.222   23:50:06	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:36.222   23:50:06	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:36.222   23:50:06	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:36.222   23:50:06	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:36.222    23:50:06	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:36.222    23:50:06	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:36.480   23:50:07	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:36.480    "name": "Existed_Raid",
00:16:36.480    "uuid": "2a2f8821-ce7d-4620-92c4-eabfb8480961",
00:16:36.480    "strip_size_kb": 0,
00:16:36.480    "state": "configuring",
00:16:36.480    "raid_level": "raid1",
00:16:36.480    "superblock": true,
00:16:36.480    "num_base_bdevs": 3,
00:16:36.480    "num_base_bdevs_discovered": 2,
00:16:36.480    "num_base_bdevs_operational": 3,
00:16:36.480    "base_bdevs_list": [
00:16:36.480      {
00:16:36.480        "name": "BaseBdev1",
00:16:36.480        "uuid": "701217c2-1eac-42b2-8939-b7382e90e0d4",
00:16:36.480        "is_configured": true,
00:16:36.480        "data_offset": 2048,
00:16:36.480        "data_size": 63488
00:16:36.480      },
00:16:36.480      {
00:16:36.480        "name": "BaseBdev2",
00:16:36.480        "uuid": "51f1c2c1-8743-4d2b-b2b3-5efea8f26372",
00:16:36.480        "is_configured": true,
00:16:36.480        "data_offset": 2048,
00:16:36.480        "data_size": 63488
00:16:36.481      },
00:16:36.481      {
00:16:36.481        "name": "BaseBdev3",
00:16:36.481        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:36.481        "is_configured": false,
00:16:36.481        "data_offset": 0,
00:16:36.481        "data_size": 0
00:16:36.481      }
00:16:36.481    ]
00:16:36.481  }'
00:16:36.481   23:50:07	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:36.481   23:50:07	-- common/autotest_common.sh@10 -- # set +x
00:16:37.046   23:50:07	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:16:37.305  [2024-12-13 23:50:07.854804] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:37.305  [2024-12-13 23:50:07.855282] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580
00:16:37.305  [2024-12-13 23:50:07.855405] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:16:37.305  [2024-12-13 23:50:07.855602] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:16:37.305  BaseBdev3
00:16:37.305  [2024-12-13 23:50:07.856205] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580
00:16:37.305  [2024-12-13 23:50:07.856221] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580
00:16:37.305  [2024-12-13 23:50:07.856380] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
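With BaseBdev3 claimed, all three members are present, and the debug lines above ("io device register", "raid bdev is created with name Existed_Raid") mark the transition from configuring to online. One way to observe it:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid") | .state'   # now: online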
00:16:37.305   23:50:07	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:16:37.305   23:50:07	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:16:37.306   23:50:07	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:37.306   23:50:07	-- common/autotest_common.sh@899 -- # local i
00:16:37.306   23:50:07	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:37.306   23:50:07	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:37.306   23:50:07	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:37.564   23:50:08	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:37.823  [
00:16:37.823    {
00:16:37.823      "name": "BaseBdev3",
00:16:37.823      "aliases": [
00:16:37.823        "b8246475-9a7f-4688-a711-e3763105b113"
00:16:37.823      ],
00:16:37.823      "product_name": "Malloc disk",
00:16:37.823      "block_size": 512,
00:16:37.823      "num_blocks": 65536,
00:16:37.823      "uuid": "b8246475-9a7f-4688-a711-e3763105b113",
00:16:37.823      "assigned_rate_limits": {
00:16:37.823        "rw_ios_per_sec": 0,
00:16:37.823        "rw_mbytes_per_sec": 0,
00:16:37.823        "r_mbytes_per_sec": 0,
00:16:37.823        "w_mbytes_per_sec": 0
00:16:37.823      },
00:16:37.823      "claimed": true,
00:16:37.823      "claim_type": "exclusive_write",
00:16:37.823      "zoned": false,
00:16:37.823      "supported_io_types": {
00:16:37.823        "read": true,
00:16:37.823        "write": true,
00:16:37.823        "unmap": true,
00:16:37.823        "write_zeroes": true,
00:16:37.823        "flush": true,
00:16:37.823        "reset": true,
00:16:37.823        "compare": false,
00:16:37.823        "compare_and_write": false,
00:16:37.823        "abort": true,
00:16:37.823        "nvme_admin": false,
00:16:37.823        "nvme_io": false
00:16:37.823      },
00:16:37.823      "memory_domains": [
00:16:37.823        {
00:16:37.823          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:37.823          "dma_device_type": 2
00:16:37.823        }
00:16:37.823      ],
00:16:37.823      "driver_specific": {}
00:16:37.823    }
00:16:37.823  ]
00:16:37.823   23:50:08	-- common/autotest_common.sh@905 -- # return 0
00:16:37.823   23:50:08	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:37.823   23:50:08	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:37.823   23:50:08	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:16:37.823   23:50:08	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:37.823   23:50:08	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:37.823   23:50:08	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:37.823   23:50:08	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:37.823   23:50:08	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:37.823   23:50:08	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:37.823   23:50:08	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:37.823   23:50:08	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:37.823   23:50:08	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:37.823    23:50:08	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:37.823    23:50:08	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:38.082   23:50:08	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:38.082    "name": "Existed_Raid",
00:16:38.082    "uuid": "2a2f8821-ce7d-4620-92c4-eabfb8480961",
00:16:38.082    "strip_size_kb": 0,
00:16:38.082    "state": "online",
00:16:38.082    "raid_level": "raid1",
00:16:38.082    "superblock": true,
00:16:38.082    "num_base_bdevs": 3,
00:16:38.082    "num_base_bdevs_discovered": 3,
00:16:38.082    "num_base_bdevs_operational": 3,
00:16:38.082    "base_bdevs_list": [
00:16:38.082      {
00:16:38.082        "name": "BaseBdev1",
00:16:38.082        "uuid": "701217c2-1eac-42b2-8939-b7382e90e0d4",
00:16:38.082        "is_configured": true,
00:16:38.082        "data_offset": 2048,
00:16:38.082        "data_size": 63488
00:16:38.082      },
00:16:38.082      {
00:16:38.082        "name": "BaseBdev2",
00:16:38.082        "uuid": "51f1c2c1-8743-4d2b-b2b3-5efea8f26372",
00:16:38.082        "is_configured": true,
00:16:38.082        "data_offset": 2048,
00:16:38.082        "data_size": 63488
00:16:38.082      },
00:16:38.082      {
00:16:38.082        "name": "BaseBdev3",
00:16:38.082        "uuid": "b8246475-9a7f-4688-a711-e3763105b113",
00:16:38.082        "is_configured": true,
00:16:38.082        "data_offset": 2048,
00:16:38.082        "data_size": 63488
00:16:38.082      }
00:16:38.082    ]
00:16:38.082  }'
00:16:38.082   23:50:08	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:38.082   23:50:08	-- common/autotest_common.sh@10 -- # set +x
00:16:38.648   23:50:09	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:16:38.648  [2024-12-13 23:50:09.353298] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:38.906   23:50:09	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:16:38.906   23:50:09	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid1
00:16:38.906   23:50:09	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:16:38.906   23:50:09	-- bdev/bdev_raid.sh@196 -- # return 0
00:16:38.906   23:50:09	-- bdev/bdev_raid.sh@267 -- # expected_state=online
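has_redundancy returned 0 for raid1, so the expected state after deleting one base bdev stays online with two of three members discovered, which the JSON below confirms. A minimal sketch of that decision, not the helper's exact body:

    case "$raid_level" in
        raid1) expected_state=online ;;   # redundant: survives one member loss
        *)     expected_state=offline ;;  # non-redundant levels would go down
    esac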
00:16:38.906   23:50:09	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:16:38.906   23:50:09	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:38.906   23:50:09	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:38.906   23:50:09	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:38.906   23:50:09	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:38.906   23:50:09	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:16:38.906   23:50:09	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:38.906   23:50:09	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:38.906   23:50:09	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:38.906   23:50:09	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:38.906    23:50:09	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:38.906    23:50:09	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:39.164   23:50:09	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:39.165    "name": "Existed_Raid",
00:16:39.165    "uuid": "2a2f8821-ce7d-4620-92c4-eabfb8480961",
00:16:39.165    "strip_size_kb": 0,
00:16:39.165    "state": "online",
00:16:39.165    "raid_level": "raid1",
00:16:39.165    "superblock": true,
00:16:39.165    "num_base_bdevs": 3,
00:16:39.165    "num_base_bdevs_discovered": 2,
00:16:39.165    "num_base_bdevs_operational": 2,
00:16:39.165    "base_bdevs_list": [
00:16:39.165      {
00:16:39.165        "name": null,
00:16:39.165        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:39.165        "is_configured": false,
00:16:39.165        "data_offset": 2048,
00:16:39.165        "data_size": 63488
00:16:39.165      },
00:16:39.165      {
00:16:39.165        "name": "BaseBdev2",
00:16:39.165        "uuid": "51f1c2c1-8743-4d2b-b2b3-5efea8f26372",
00:16:39.165        "is_configured": true,
00:16:39.165        "data_offset": 2048,
00:16:39.165        "data_size": 63488
00:16:39.165      },
00:16:39.165      {
00:16:39.165        "name": "BaseBdev3",
00:16:39.165        "uuid": "b8246475-9a7f-4688-a711-e3763105b113",
00:16:39.165        "is_configured": true,
00:16:39.165        "data_offset": 2048,
00:16:39.165        "data_size": 63488
00:16:39.165      }
00:16:39.165    ]
00:16:39.165  }'
00:16:39.165   23:50:09	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:39.165   23:50:09	-- common/autotest_common.sh@10 -- # set +x
00:16:39.740   23:50:10	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:16:39.740   23:50:10	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:39.740    23:50:10	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:39.740    23:50:10	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:16:39.999   23:50:10	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:16:39.999   23:50:10	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:39.999   23:50:10	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:16:40.257  [2024-12-13 23:50:10.756828] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:40.257   23:50:10	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:16:40.257   23:50:10	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:40.257    23:50:10	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:40.257    23:50:10	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:16:40.515   23:50:11	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:16:40.515   23:50:11	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:40.515   23:50:11	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:16:40.773  [2024-12-13 23:50:11.268154] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:40.773  [2024-12-13 23:50:11.268315] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:40.773  [2024-12-13 23:50:11.268471] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:40.773  [2024-12-13 23:50:11.335268] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:40.773  [2024-12-13 23:50:11.335472] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline
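Once the last member is removed the raid deconfigures (online to offline) and, with zero base bdevs left, destructs; the follow-up query below comes back empty, confirming no raid bdev remains.

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r '.[0]["name"] | select(.)'   # empty once Existed_Raid is gone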
00:16:40.773   23:50:11	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:16:40.773   23:50:11	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:40.773    23:50:11	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:40.773    23:50:11	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:16:41.032   23:50:11	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:16:41.032   23:50:11	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:16:41.032   23:50:11	-- bdev/bdev_raid.sh@287 -- # killprocess 117352
00:16:41.032   23:50:11	-- common/autotest_common.sh@936 -- # '[' -z 117352 ']'
00:16:41.032   23:50:11	-- common/autotest_common.sh@940 -- # kill -0 117352
00:16:41.032    23:50:11	-- common/autotest_common.sh@941 -- # uname
00:16:41.032   23:50:11	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:41.032    23:50:11	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117352
00:16:41.032   23:50:11	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:41.032   23:50:11	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:41.032   23:50:11	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 117352'
00:16:41.032  killing process with pid 117352
00:16:41.032   23:50:11	-- common/autotest_common.sh@955 -- # kill 117352
00:16:41.032  [2024-12-13 23:50:11.625241] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:41.032  [2024-12-13 23:50:11.625349] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:41.032   23:50:11	-- common/autotest_common.sh@960 -- # wait 117352
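killprocess, traced above, is the standard teardown: confirm the pid is still alive, check the command name, then SIGTERM and reap it so the fini-start/exit debug lines can flush. A simplified reconstruction from the xtrace:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                   # still running?
        process_name=$(ps --no-headers -o comm= "$pid")  # on Linux
        # the real helper special-cases process_name = sudo; omitted here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap, collect status
    }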
00:16:41.968  ************************************
00:16:41.968  END TEST raid_state_function_test_sb
00:16:41.968  ************************************
00:16:41.968   23:50:12	-- bdev/bdev_raid.sh@289 -- # return 0
00:16:41.968  
00:16:41.968  real	0m12.751s
00:16:41.968  user	0m22.327s
00:16:41.968  sys	0m1.623s
00:16:41.968   23:50:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:41.968   23:50:12	-- common/autotest_common.sh@10 -- # set +x
00:16:41.968   23:50:12	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3
00:16:41.968   23:50:12	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:16:41.968   23:50:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:41.968   23:50:12	-- common/autotest_common.sh@10 -- # set +x
00:16:42.227  ************************************
00:16:42.227  START TEST raid_superblock_test
00:16:42.227  ************************************
00:16:42.227   23:50:12	-- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 3
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@338 -- # local raid_level=raid1
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']'
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@353 -- # strip_size=0
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@357 -- # raid_pid=117742
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@358 -- # waitforlisten 117742 /var/tmp/spdk-raid.sock
00:16:42.227   23:50:12	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:16:42.227   23:50:12	-- common/autotest_common.sh@829 -- # '[' -z 117742 ']'
00:16:42.227   23:50:12	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:16:42.227   23:50:12	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:42.227   23:50:12	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:16:42.227  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:16:42.227   23:50:12	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:42.227   23:50:12	-- common/autotest_common.sh@10 -- # set +x
00:16:42.227  [2024-12-13 23:50:12.778706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:42.227  [2024-12-13 23:50:12.778887] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117742 ]
00:16:42.227  [2024-12-13 23:50:12.945843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:42.486  [2024-12-13 23:50:13.122979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:42.745  [2024-12-13 23:50:13.307546] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:43.003   23:50:13	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:43.003   23:50:13	-- common/autotest_common.sh@862 -- # return 0
00:16:43.003   23:50:13	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:16:43.003   23:50:13	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:43.003   23:50:13	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:16:43.003   23:50:13	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:16:43.003   23:50:13	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:16:43.003   23:50:13	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:43.003   23:50:13	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:43.003   23:50:13	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:43.003   23:50:13	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:16:43.262  malloc1
00:16:43.262   23:50:13	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:43.521  [2024-12-13 23:50:14.202629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:43.521  [2024-12-13 23:50:14.203171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:43.521  [2024-12-13 23:50:14.203368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:16:43.521  [2024-12-13 23:50:14.203539] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:43.521  [2024-12-13 23:50:14.205879] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:43.521  [2024-12-13 23:50:14.206045] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:43.521  pt1
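raid_superblock_test layers a passthru bdev with a fixed UUID over each malloc, presumably so the superblock the raid module writes lands on a member whose identity is stable across recreation. The pair of RPCs for the first member, condensed from above:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001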
00:16:43.521   23:50:14	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:43.521   23:50:14	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:43.521   23:50:14	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:16:43.521   23:50:14	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:16:43.521   23:50:14	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:16:43.521   23:50:14	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:43.521   23:50:14	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:43.521   23:50:14	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:43.521   23:50:14	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:16:43.780  malloc2
00:16:43.780   23:50:14	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:44.038  [2024-12-13 23:50:14.698089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:44.038  [2024-12-13 23:50:14.698377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:44.038  [2024-12-13 23:50:14.698537] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:16:44.038  [2024-12-13 23:50:14.698707] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:44.038  [2024-12-13 23:50:14.700992] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:44.038  [2024-12-13 23:50:14.701134] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:44.038  pt2
00:16:44.038   23:50:14	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:44.038   23:50:14	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:44.039   23:50:14	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:16:44.039   23:50:14	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:16:44.039   23:50:14	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:16:44.039   23:50:14	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:44.039   23:50:14	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:44.039   23:50:14	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:44.039   23:50:14	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:16:44.297  malloc3
00:16:44.297   23:50:14	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:44.557  [2024-12-13 23:50:15.115235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:44.557  [2024-12-13 23:50:15.115511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:44.557  [2024-12-13 23:50:15.115686] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:16:44.557  [2024-12-13 23:50:15.115856] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:44.557  [2024-12-13 23:50:15.118091] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:44.557  [2024-12-13 23:50:15.118255] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:44.557  pt3
00:16:44.557   23:50:15	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:44.557   23:50:15	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:44.557   23:50:15	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
00:16:44.815  [2024-12-13 23:50:15.307288] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:44.815  [2024-12-13 23:50:15.309242] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:44.815  [2024-12-13 23:50:15.309311] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:44.815  [2024-12-13 23:50:15.309514] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780
00:16:44.815  [2024-12-13 23:50:15.309527] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:16:44.815  [2024-12-13 23:50:15.309649] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930
00:16:44.815  [2024-12-13 23:50:15.310047] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780
00:16:44.815  [2024-12-13 23:50:15.310068] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780
00:16:44.815  [2024-12-13 23:50:15.310221] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:44.815   23:50:15	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:16:44.815   23:50:15	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:44.815   23:50:15	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:44.815   23:50:15	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:44.815   23:50:15	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:44.815   23:50:15	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:44.815   23:50:15	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:44.815   23:50:15	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:44.815   23:50:15	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:44.815   23:50:15	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:44.815    23:50:15	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:44.815    23:50:15	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:44.815   23:50:15	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:44.815    "name": "raid_bdev1",
00:16:44.815    "uuid": "db92398c-76b0-4df4-bac0-203952987521",
00:16:44.815    "strip_size_kb": 0,
00:16:44.815    "state": "online",
00:16:44.815    "raid_level": "raid1",
00:16:44.815    "superblock": true,
00:16:44.815    "num_base_bdevs": 3,
00:16:44.815    "num_base_bdevs_discovered": 3,
00:16:44.815    "num_base_bdevs_operational": 3,
00:16:44.815    "base_bdevs_list": [
00:16:44.815      {
00:16:44.815        "name": "pt1",
00:16:44.815        "uuid": "47aef0bc-f6fe-5e23-a952-08ce42a56135",
00:16:44.815        "is_configured": true,
00:16:44.815        "data_offset": 2048,
00:16:44.815        "data_size": 63488
00:16:44.815      },
00:16:44.815      {
00:16:44.815        "name": "pt2",
00:16:44.815        "uuid": "1bcd29a2-b8a5-563e-8337-d4fc56325cc7",
00:16:44.815        "is_configured": true,
00:16:44.815        "data_offset": 2048,
00:16:44.815        "data_size": 63488
00:16:44.815      },
00:16:44.815      {
00:16:44.815        "name": "pt3",
00:16:44.815        "uuid": "9b499895-a0bf-51cb-ade8-c7ae18f641fd",
00:16:44.815        "is_configured": true,
00:16:44.815        "data_offset": 2048,
00:16:44.815        "data_size": 63488
00:16:44.815      }
00:16:44.815    ]
00:16:44.815  }'
00:16:44.815   23:50:15	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:44.815   23:50:15	-- common/autotest_common.sh@10 -- # set +x
00:16:45.382    23:50:16	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:16:45.382    23:50:16	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:16:45.640  [2024-12-13 23:50:16.275558] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:45.640   23:50:16	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=db92398c-76b0-4df4-bac0-203952987521
00:16:45.640   23:50:16	-- bdev/bdev_raid.sh@380 -- # '[' -z db92398c-76b0-4df4-bac0-203952987521 ']'
00:16:45.640   23:50:16	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:16:45.899  [2024-12-13 23:50:16.519417] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:45.899  [2024-12-13 23:50:16.519440] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:45.899  [2024-12-13 23:50:16.519500] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:45.899  [2024-12-13 23:50:16.519574] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:45.899  [2024-12-13 23:50:16.519585] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline
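bdev_raid_delete tears down raid_bdev1 but does not wipe the superblocks already written to the members; the rest of this test depends on that leftover metadata. The query below comes back empty, so only the raid bdev itself is gone:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[]'   # empty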
00:16:45.899    23:50:16	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:45.899    23:50:16	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:16:46.157   23:50:16	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:16:46.157   23:50:16	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:16:46.157   23:50:16	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:46.157   23:50:16	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:16:46.416   23:50:16	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:46.416   23:50:16	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:46.416   23:50:17	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:46.416   23:50:17	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:16:46.674    23:50:17	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:16:46.674    23:50:17	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:16:46.933   23:50:17	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:16:46.933   23:50:17	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:16:46.933   23:50:17	-- common/autotest_common.sh@650 -- # local es=0
00:16:46.933   23:50:17	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:16:46.933   23:50:17	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:46.933   23:50:17	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:46.933    23:50:17	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:46.933   23:50:17	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:46.933    23:50:17	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:46.933   23:50:17	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:46.933   23:50:17	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:46.933   23:50:17	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:16:46.933   23:50:17	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:16:47.191  [2024-12-13 23:50:17.715618] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:16:47.191  [2024-12-13 23:50:17.717556] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:16:47.191  [2024-12-13 23:50:17.717671] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:16:47.191  [2024-12-13 23:50:17.717723] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:16:47.191  [2024-12-13 23:50:17.718133] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:16:47.191  [2024-12-13 23:50:17.718327] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:16:47.191  [2024-12-13 23:50:17.718491] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:47.191  [2024-12-13 23:50:17.718509] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring
00:16:47.191  request:
00:16:47.191  {
00:16:47.191    "name": "raid_bdev1",
00:16:47.191    "raid_level": "raid1",
00:16:47.191    "base_bdevs": [
00:16:47.191      "malloc1",
00:16:47.191      "malloc2",
00:16:47.191      "malloc3"
00:16:47.191    ],
00:16:47.191    "superblock": false,
00:16:47.191    "method": "bdev_raid_create",
00:16:47.191    "req_id": 1
00:16:47.191  }
00:16:47.191  Got JSON-RPC error response
00:16:47.191  response:
00:16:47.191  {
00:16:47.191    "code": -17,
00:16:47.191    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:16:47.191  }
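The error above is the expected outcome: each malloc base bdev still carries a RAID superblock from the earlier create, so raid_bdev_configure_base_bdev_check_sb_cb rejects the request, bdev_raid_create fails with -17 (File exists), and the half-built raid_bdev1 is cleaned up again (the raid_bdev_delete / raid_bdev_cleanup DEBUG lines). The NOT helper traced above simply inverts the exit status; a sketch of the same expected-failure check without it:

  # Succeed only if the create fails, as it must while the stale
  # superblocks are still on malloc1..3.
  if rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
       -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
    echo "bdev_raid_create unexpectedly succeeded" >&2
    exit 1
  fi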
00:16:47.191   23:50:17	-- common/autotest_common.sh@653 -- # es=1
00:16:47.191   23:50:17	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:16:47.191   23:50:17	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:16:47.191   23:50:17	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:16:47.191    23:50:17	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:16:47.191    23:50:17	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:47.450   23:50:17	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:16:47.450   23:50:17	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:16:47.450   23:50:17	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:47.709  [2024-12-13 23:50:18.187655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:47.709  [2024-12-13 23:50:18.187877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:47.709  [2024-12-13 23:50:18.188041] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:16:47.709  [2024-12-13 23:50:18.188188] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:47.709  [2024-12-13 23:50:18.190515] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:47.709  [2024-12-13 23:50:18.190676] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:47.709  [2024-12-13 23:50:18.190893] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:16:47.709  [2024-12-13 23:50:18.190955] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:47.709  pt1
00:16:47.709   23:50:18	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:16:47.709   23:50:18	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:47.709   23:50:18	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:47.709   23:50:18	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:47.709   23:50:18	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:47.709   23:50:18	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:47.709   23:50:18	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:47.709   23:50:18	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:47.709   23:50:18	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:47.709   23:50:18	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:47.709    23:50:18	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:47.709    23:50:18	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:47.709   23:50:18	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:47.709    "name": "raid_bdev1",
00:16:47.709    "uuid": "db92398c-76b0-4df4-bac0-203952987521",
00:16:47.709    "strip_size_kb": 0,
00:16:47.709    "state": "configuring",
00:16:47.709    "raid_level": "raid1",
00:16:47.709    "superblock": true,
00:16:47.709    "num_base_bdevs": 3,
00:16:47.709    "num_base_bdevs_discovered": 1,
00:16:47.709    "num_base_bdevs_operational": 3,
00:16:47.709    "base_bdevs_list": [
00:16:47.709      {
00:16:47.709        "name": "pt1",
00:16:47.709        "uuid": "47aef0bc-f6fe-5e23-a952-08ce42a56135",
00:16:47.709        "is_configured": true,
00:16:47.709        "data_offset": 2048,
00:16:47.709        "data_size": 63488
00:16:47.709      },
00:16:47.709      {
00:16:47.709        "name": null,
00:16:47.709        "uuid": "1bcd29a2-b8a5-563e-8337-d4fc56325cc7",
00:16:47.709        "is_configured": false,
00:16:47.709        "data_offset": 2048,
00:16:47.709        "data_size": 63488
00:16:47.709      },
00:16:47.709      {
00:16:47.709        "name": null,
00:16:47.709        "uuid": "9b499895-a0bf-51cb-ade8-c7ae18f641fd",
00:16:47.709        "is_configured": false,
00:16:47.709        "data_offset": 2048,
00:16:47.709        "data_size": 63488
00:16:47.709      }
00:16:47.709    ]
00:16:47.709  }'
00:16:47.709   23:50:18	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:47.709   23:50:18	-- common/autotest_common.sh@10 -- # set +x
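verify_raid_bdev_state, whose locals are traced at @117..@127 above, pulls the named raid bdev's JSON and checks it field by field; the comparisons themselves run with xtrace disabled (set +x), which is why only the setup is visible. Its input can be reproduced with the same jq filter, e.g.:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1")
             | "\(.state) \(.raid_level) \(.num_base_bdevs_discovered)"'
  # Expected at this point: "configuring raid1 1" -- only pt1 exists again.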
00:16:48.276   23:50:18	-- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']'
00:16:48.276   23:50:18	-- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:48.535  [2024-12-13 23:50:19.171838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:48.535  [2024-12-13 23:50:19.172148] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:48.535  [2024-12-13 23:50:19.172292] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:16:48.535  [2024-12-13 23:50:19.172434] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:48.535  [2024-12-13 23:50:19.172946] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:48.535  [2024-12-13 23:50:19.173095] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:48.535  [2024-12-13 23:50:19.173310] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:16:48.535  [2024-12-13 23:50:19.173336] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:48.535  pt2
00:16:48.535   23:50:19	-- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:48.835  [2024-12-13 23:50:19.439911] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:16:48.835   23:50:19	-- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:16:48.835   23:50:19	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:48.835   23:50:19	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:48.835   23:50:19	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:48.835   23:50:19	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:48.835   23:50:19	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:48.835   23:50:19	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:48.835   23:50:19	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:48.835   23:50:19	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:48.835   23:50:19	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:48.835    23:50:19	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:48.835    23:50:19	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:49.110   23:50:19	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:49.110    "name": "raid_bdev1",
00:16:49.110    "uuid": "db92398c-76b0-4df4-bac0-203952987521",
00:16:49.110    "strip_size_kb": 0,
00:16:49.110    "state": "configuring",
00:16:49.110    "raid_level": "raid1",
00:16:49.110    "superblock": true,
00:16:49.110    "num_base_bdevs": 3,
00:16:49.110    "num_base_bdevs_discovered": 1,
00:16:49.110    "num_base_bdevs_operational": 3,
00:16:49.110    "base_bdevs_list": [
00:16:49.110      {
00:16:49.110        "name": "pt1",
00:16:49.110        "uuid": "47aef0bc-f6fe-5e23-a952-08ce42a56135",
00:16:49.110        "is_configured": true,
00:16:49.110        "data_offset": 2048,
00:16:49.110        "data_size": 63488
00:16:49.110      },
00:16:49.110      {
00:16:49.110        "name": null,
00:16:49.110        "uuid": "1bcd29a2-b8a5-563e-8337-d4fc56325cc7",
00:16:49.110        "is_configured": false,
00:16:49.110        "data_offset": 2048,
00:16:49.110        "data_size": 63488
00:16:49.110      },
00:16:49.110      {
00:16:49.110        "name": null,
00:16:49.110        "uuid": "9b499895-a0bf-51cb-ade8-c7ae18f641fd",
00:16:49.110        "is_configured": false,
00:16:49.110        "data_offset": 2048,
00:16:49.110        "data_size": 63488
00:16:49.110      }
00:16:49.110    ]
00:16:49.110  }'
00:16:49.110   23:50:19	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:49.110   23:50:19	-- common/autotest_common.sh@10 -- # set +x
00:16:49.677   23:50:20	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:16:49.677   23:50:20	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:16:49.677   23:50:20	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:49.935  [2024-12-13 23:50:20.464078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:49.935  [2024-12-13 23:50:20.464353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:49.935  [2024-12-13 23:50:20.464518] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:16:49.935  [2024-12-13 23:50:20.464678] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:49.935  [2024-12-13 23:50:20.465165] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:49.935  [2024-12-13 23:50:20.465345] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:49.935  [2024-12-13 23:50:20.465566] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:16:49.935  [2024-12-13 23:50:20.465606] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:49.935  pt2
00:16:49.935   23:50:20	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:16:49.935   23:50:20	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:16:49.935   23:50:20	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:50.194  [2024-12-13 23:50:20.704123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:50.194  [2024-12-13 23:50:20.704272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:50.194  [2024-12-13 23:50:20.704420] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:16:50.194  [2024-12-13 23:50:20.704544] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:50.194  [2024-12-13 23:50:20.705053] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:50.194  [2024-12-13 23:50:20.705200] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:50.194  [2024-12-13 23:50:20.705438] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:16:50.194  [2024-12-13 23:50:20.705463] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:50.194  [2024-12-13 23:50:20.705588] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980
00:16:50.194  [2024-12-13 23:50:20.705625] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:16:50.194  [2024-12-13 23:50:20.705722] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:16:50.194  [2024-12-13 23:50:20.706052] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980
00:16:50.194  [2024-12-13 23:50:20.706075] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980
00:16:50.194  [2024-12-13 23:50:20.706193] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:50.194  pt3
00:16:50.194   23:50:20	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:16:50.194   23:50:20	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
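Note what the pt3 block above captures: no explicit RPC brings the array up. As soon as the last missing member registers, the examine path (raid_bdev_examine_load_sb_cb) matches its superblock, claims it, and raid_bdev_configure_cont creates the raid bdev ("raid bdev is created with name raid_bdev1"). The verify call that follows only observes the result:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # -> online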
00:16:50.194   23:50:20	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:16:50.194   23:50:20	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:50.194   23:50:20	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:50.194   23:50:20	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:50.194   23:50:20	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:50.194   23:50:20	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:50.194   23:50:20	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:50.194   23:50:20	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:50.194   23:50:20	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:50.194   23:50:20	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:50.194    23:50:20	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:50.194    23:50:20	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:50.453   23:50:20	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:50.453    "name": "raid_bdev1",
00:16:50.453    "uuid": "db92398c-76b0-4df4-bac0-203952987521",
00:16:50.453    "strip_size_kb": 0,
00:16:50.453    "state": "online",
00:16:50.453    "raid_level": "raid1",
00:16:50.453    "superblock": true,
00:16:50.453    "num_base_bdevs": 3,
00:16:50.453    "num_base_bdevs_discovered": 3,
00:16:50.453    "num_base_bdevs_operational": 3,
00:16:50.453    "base_bdevs_list": [
00:16:50.453      {
00:16:50.453        "name": "pt1",
00:16:50.453        "uuid": "47aef0bc-f6fe-5e23-a952-08ce42a56135",
00:16:50.453        "is_configured": true,
00:16:50.453        "data_offset": 2048,
00:16:50.453        "data_size": 63488
00:16:50.453      },
00:16:50.453      {
00:16:50.453        "name": "pt2",
00:16:50.453        "uuid": "1bcd29a2-b8a5-563e-8337-d4fc56325cc7",
00:16:50.453        "is_configured": true,
00:16:50.453        "data_offset": 2048,
00:16:50.453        "data_size": 63488
00:16:50.453      },
00:16:50.453      {
00:16:50.453        "name": "pt3",
00:16:50.453        "uuid": "9b499895-a0bf-51cb-ade8-c7ae18f641fd",
00:16:50.453        "is_configured": true,
00:16:50.453        "data_offset": 2048,
00:16:50.453        "data_size": 63488
00:16:50.453      }
00:16:50.453    ]
00:16:50.453  }'
00:16:50.453   23:50:20	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:50.453   23:50:20	-- common/autotest_common.sh@10 -- # set +x
00:16:51.019    23:50:21	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:16:51.019    23:50:21	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:16:51.019  [2024-12-13 23:50:21.745290] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:51.278   23:50:21	-- bdev/bdev_raid.sh@430 -- # '[' db92398c-76b0-4df4-bac0-203952987521 '!=' db92398c-76b0-4df4-bac0-203952987521 ']'
00:16:51.278   23:50:21	-- bdev/bdev_raid.sh@434 -- # has_redundancy raid1
00:16:51.278   23:50:21	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:16:51.278   23:50:21	-- bdev/bdev_raid.sh@196 -- # return 0
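has_redundancy is the gate for the degraded-mode checks that follow: removing a member from a live array only makes sense for levels that tolerate it, so raid1 falls through to return 0 here while striped levels would skip this branch. A plausible shape for such a helper (illustrative sketch, not the verbatim function from bdev_raid.sh):

  has_redundancy() {
    case $1 in
      raid1) return 0 ;;  # mirrored: survives a missing base bdev
      *)     return 1 ;;  # raid0/concat: does not
    esac
  }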
00:16:51.278   23:50:21	-- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:16:51.278  [2024-12-13 23:50:21.933154] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:16:51.278   23:50:21	-- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:16:51.278   23:50:21	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:51.278   23:50:21	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:51.279   23:50:21	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:51.279   23:50:21	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:51.279   23:50:21	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:16:51.279   23:50:21	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:51.279   23:50:21	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:51.279   23:50:21	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:51.279   23:50:21	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:51.279    23:50:21	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:51.279    23:50:21	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:51.538   23:50:22	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:51.538    "name": "raid_bdev1",
00:16:51.538    "uuid": "db92398c-76b0-4df4-bac0-203952987521",
00:16:51.538    "strip_size_kb": 0,
00:16:51.538    "state": "online",
00:16:51.538    "raid_level": "raid1",
00:16:51.538    "superblock": true,
00:16:51.538    "num_base_bdevs": 3,
00:16:51.538    "num_base_bdevs_discovered": 2,
00:16:51.538    "num_base_bdevs_operational": 2,
00:16:51.538    "base_bdevs_list": [
00:16:51.538      {
00:16:51.538        "name": null,
00:16:51.538        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:51.538        "is_configured": false,
00:16:51.538        "data_offset": 2048,
00:16:51.538        "data_size": 63488
00:16:51.538      },
00:16:51.538      {
00:16:51.538        "name": "pt2",
00:16:51.538        "uuid": "1bcd29a2-b8a5-563e-8337-d4fc56325cc7",
00:16:51.538        "is_configured": true,
00:16:51.538        "data_offset": 2048,
00:16:51.538        "data_size": 63488
00:16:51.538      },
00:16:51.538      {
00:16:51.538        "name": "pt3",
00:16:51.538        "uuid": "9b499895-a0bf-51cb-ade8-c7ae18f641fd",
00:16:51.538        "is_configured": true,
00:16:51.538        "data_offset": 2048,
00:16:51.538        "data_size": 63488
00:16:51.538      }
00:16:51.538    ]
00:16:51.538  }'
00:16:51.538   23:50:22	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:51.538   23:50:22	-- common/autotest_common.sh@10 -- # set +x
00:16:52.105   23:50:22	-- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:16:52.363  [2024-12-13 23:50:23.050033] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:52.363  [2024-12-13 23:50:23.050057] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:52.363  [2024-12-13 23:50:23.050099] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:52.363  [2024-12-13 23:50:23.050151] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:52.363  [2024-12-13 23:50:23.050161] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline
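Deleting the online array walks the full teardown shown above: deconfigure (online -> offline), destruct, and a final raid_bdev_cleanup once the base bdev count drops to zero. The check that follows (raid_bdev= ends up empty) can be reproduced directly:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[]'
  # No output expected: the raid bdev is gone from the target.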
00:16:52.363    23:50:23	-- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:52.363    23:50:23	-- bdev/bdev_raid.sh@443 -- # jq -r '.[]'
00:16:52.622   23:50:23	-- bdev/bdev_raid.sh@443 -- # raid_bdev=
00:16:52.622   23:50:23	-- bdev/bdev_raid.sh@444 -- # '[' -n '' ']'
00:16:52.622   23:50:23	-- bdev/bdev_raid.sh@449 -- # (( i = 1 ))
00:16:52.622   23:50:23	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:16:52.622   23:50:23	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:52.880   23:50:23	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:16:52.880   23:50:23	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:16:52.880   23:50:23	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:16:53.139   23:50:23	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:16:53.139   23:50:23	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:16:53.139   23:50:23	-- bdev/bdev_raid.sh@454 -- # (( i = 1 ))
00:16:53.139   23:50:23	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:16:53.139   23:50:23	-- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:53.398  [2024-12-13 23:50:23.898162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:53.398  [2024-12-13 23:50:23.898212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:53.398  [2024-12-13 23:50:23.898243] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:16:53.398  [2024-12-13 23:50:23.898267] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:53.398  [2024-12-13 23:50:23.900323] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:53.398  [2024-12-13 23:50:23.900370] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:53.398  [2024-12-13 23:50:23.900466] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:16:53.398  [2024-12-13 23:50:23.900508] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:53.398  pt2
00:16:53.398   23:50:23	-- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:16:53.398   23:50:23	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:53.398   23:50:23	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:53.398   23:50:23	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:53.398   23:50:23	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:53.398   23:50:23	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:16:53.398   23:50:23	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:53.398   23:50:23	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:53.398   23:50:23	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:53.398   23:50:23	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:53.398    23:50:23	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:53.398    23:50:23	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:53.657   23:50:24	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:53.657    "name": "raid_bdev1",
00:16:53.657    "uuid": "db92398c-76b0-4df4-bac0-203952987521",
00:16:53.657    "strip_size_kb": 0,
00:16:53.657    "state": "configuring",
00:16:53.657    "raid_level": "raid1",
00:16:53.657    "superblock": true,
00:16:53.657    "num_base_bdevs": 3,
00:16:53.657    "num_base_bdevs_discovered": 1,
00:16:53.657    "num_base_bdevs_operational": 2,
00:16:53.657    "base_bdevs_list": [
00:16:53.657      {
00:16:53.657        "name": null,
00:16:53.657        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:53.657        "is_configured": false,
00:16:53.657        "data_offset": 2048,
00:16:53.657        "data_size": 63488
00:16:53.657      },
00:16:53.657      {
00:16:53.657        "name": "pt2",
00:16:53.657        "uuid": "1bcd29a2-b8a5-563e-8337-d4fc56325cc7",
00:16:53.657        "is_configured": true,
00:16:53.657        "data_offset": 2048,
00:16:53.657        "data_size": 63488
00:16:53.657      },
00:16:53.657      {
00:16:53.657        "name": null,
00:16:53.657        "uuid": "9b499895-a0bf-51cb-ade8-c7ae18f641fd",
00:16:53.657        "is_configured": false,
00:16:53.657        "data_offset": 2048,
00:16:53.657        "data_size": 63488
00:16:53.657      }
00:16:53.657    ]
00:16:53.657  }'
00:16:53.657   23:50:24	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:53.657   23:50:24	-- common/autotest_common.sh@10 -- # set +x
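The interesting part of this state dump: no bdev_raid_create was issued between the delete above and this query, yet raid_bdev1 is back. Re-creating pt2 alone was enough for the examine path to re-assemble the array from on-disk metadata, including its original uuid and the degraded shape recorded there (3 base bdevs, 2 operational, the removed pt1 slot left unconfigured). A quick way to confirm the identity survived:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .uuid'
  # -> db92398c-76b0-4df4-bac0-203952987521, same uuid as before the delete.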
00:16:54.225   23:50:24	-- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:16:54.225   23:50:24	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:16:54.225   23:50:24	-- bdev/bdev_raid.sh@462 -- # i=2
00:16:54.225   23:50:24	-- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:54.483  [2024-12-13 23:50:24.966361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:54.483  [2024-12-13 23:50:24.966422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:54.483  [2024-12-13 23:50:24.966459] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:16:54.483  [2024-12-13 23:50:24.966483] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:54.483  [2024-12-13 23:50:24.966844] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:54.483  [2024-12-13 23:50:24.966882] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:54.483  [2024-12-13 23:50:24.966974] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:16:54.483  [2024-12-13 23:50:24.966994] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:54.484  [2024-12-13 23:50:24.967082] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80
00:16:54.484  [2024-12-13 23:50:24.967094] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:16:54.484  [2024-12-13 23:50:24.967171] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:16:54.484  [2024-12-13 23:50:24.967477] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80
00:16:54.484  [2024-12-13 23:50:24.967499] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80
00:16:54.484  [2024-12-13 23:50:24.967619] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:54.484  pt3
00:16:54.484   23:50:24	-- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:16:54.484   23:50:24	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:54.484   23:50:24	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:54.484   23:50:24	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:54.484   23:50:24	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:54.484   23:50:24	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:16:54.484   23:50:24	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:54.484   23:50:24	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:54.484   23:50:24	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:54.484   23:50:24	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:54.484    23:50:24	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:54.484    23:50:24	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:54.484   23:50:25	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:54.484    "name": "raid_bdev1",
00:16:54.484    "uuid": "db92398c-76b0-4df4-bac0-203952987521",
00:16:54.484    "strip_size_kb": 0,
00:16:54.484    "state": "online",
00:16:54.484    "raid_level": "raid1",
00:16:54.484    "superblock": true,
00:16:54.484    "num_base_bdevs": 3,
00:16:54.484    "num_base_bdevs_discovered": 2,
00:16:54.484    "num_base_bdevs_operational": 2,
00:16:54.484    "base_bdevs_list": [
00:16:54.484      {
00:16:54.484        "name": null,
00:16:54.484        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:54.484        "is_configured": false,
00:16:54.484        "data_offset": 2048,
00:16:54.484        "data_size": 63488
00:16:54.484      },
00:16:54.484      {
00:16:54.484        "name": "pt2",
00:16:54.484        "uuid": "1bcd29a2-b8a5-563e-8337-d4fc56325cc7",
00:16:54.484        "is_configured": true,
00:16:54.484        "data_offset": 2048,
00:16:54.484        "data_size": 63488
00:16:54.484      },
00:16:54.484      {
00:16:54.484        "name": "pt3",
00:16:54.484        "uuid": "9b499895-a0bf-51cb-ade8-c7ae18f641fd",
00:16:54.484        "is_configured": true,
00:16:54.484        "data_offset": 2048,
00:16:54.484        "data_size": 63488
00:16:54.484      }
00:16:54.484    ]
00:16:54.484  }'
00:16:54.484   23:50:25	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:54.484   23:50:25	-- common/autotest_common.sh@10 -- # set +x
00:16:55.051   23:50:25	-- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']'
00:16:55.051   23:50:25	-- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:16:55.310  [2024-12-13 23:50:25.922505] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:55.310  [2024-12-13 23:50:25.922530] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:55.310  [2024-12-13 23:50:25.922567] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:55.310  [2024-12-13 23:50:25.922612] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:55.310  [2024-12-13 23:50:25.922621] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline
00:16:55.310    23:50:25	-- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:55.310    23:50:25	-- bdev/bdev_raid.sh@471 -- # jq -r '.[]'
00:16:55.568   23:50:26	-- bdev/bdev_raid.sh@471 -- # raid_bdev=
00:16:55.568   23:50:26	-- bdev/bdev_raid.sh@472 -- # '[' -n '' ']'
00:16:55.568   23:50:26	-- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:55.827  [2024-12-13 23:50:26.414758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:55.827  [2024-12-13 23:50:26.414813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:55.827  [2024-12-13 23:50:26.414845] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:16:55.827  [2024-12-13 23:50:26.414869] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:55.827  [2024-12-13 23:50:26.416745] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:55.827  [2024-12-13 23:50:26.416792] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:55.827  [2024-12-13 23:50:26.416883] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:16:55.827  [2024-12-13 23:50:26.416922] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:55.827  pt1
00:16:55.827   23:50:26	-- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:16:55.827   23:50:26	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:55.827   23:50:26	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:55.827   23:50:26	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:55.827   23:50:26	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:55.827   23:50:26	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:55.827   23:50:26	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:55.827   23:50:26	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:55.827   23:50:26	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:55.827   23:50:26	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:55.827    23:50:26	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:55.827    23:50:26	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:56.086   23:50:26	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:56.086    "name": "raid_bdev1",
00:16:56.086    "uuid": "db92398c-76b0-4df4-bac0-203952987521",
00:16:56.086    "strip_size_kb": 0,
00:16:56.086    "state": "configuring",
00:16:56.086    "raid_level": "raid1",
00:16:56.086    "superblock": true,
00:16:56.086    "num_base_bdevs": 3,
00:16:56.086    "num_base_bdevs_discovered": 1,
00:16:56.086    "num_base_bdevs_operational": 3,
00:16:56.086    "base_bdevs_list": [
00:16:56.086      {
00:16:56.086        "name": "pt1",
00:16:56.086        "uuid": "47aef0bc-f6fe-5e23-a952-08ce42a56135",
00:16:56.086        "is_configured": true,
00:16:56.086        "data_offset": 2048,
00:16:56.086        "data_size": 63488
00:16:56.086      },
00:16:56.086      {
00:16:56.086        "name": null,
00:16:56.086        "uuid": "1bcd29a2-b8a5-563e-8337-d4fc56325cc7",
00:16:56.086        "is_configured": false,
00:16:56.086        "data_offset": 2048,
00:16:56.086        "data_size": 63488
00:16:56.086      },
00:16:56.086      {
00:16:56.086        "name": null,
00:16:56.086        "uuid": "9b499895-a0bf-51cb-ade8-c7ae18f641fd",
00:16:56.086        "is_configured": false,
00:16:56.086        "data_offset": 2048,
00:16:56.086        "data_size": 63488
00:16:56.086      }
00:16:56.086    ]
00:16:56.086  }'
00:16:56.086   23:50:26	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:56.086   23:50:26	-- common/autotest_common.sh@10 -- # set +x
00:16:56.653   23:50:27	-- bdev/bdev_raid.sh@484 -- # (( i = 1 ))
00:16:56.653   23:50:27	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:16:56.653   23:50:27	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:56.912   23:50:27	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:16:56.912   23:50:27	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:16:56.912   23:50:27	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:16:57.170   23:50:27	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:16:57.170   23:50:27	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:16:57.170   23:50:27	-- bdev/bdev_raid.sh@489 -- # i=2
00:16:57.170   23:50:27	-- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:57.429  [2024-12-13 23:50:27.939059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:57.429  [2024-12-13 23:50:27.939111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:57.429  [2024-12-13 23:50:27.939138] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:16:57.429  [2024-12-13 23:50:27.939160] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:57.429  [2024-12-13 23:50:27.939500] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:57.429  [2024-12-13 23:50:27.939532] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:57.429  [2024-12-13 23:50:27.939625] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:16:57.429  [2024-12-13 23:50:27.939638] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2)
00:16:57.429  [2024-12-13 23:50:27.939644] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:57.429  [2024-12-13 23:50:27.939658] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state configuring
00:16:57.429  [2024-12-13 23:50:27.939717] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:57.429  pt3
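The pt3 block above also shows how conflicting metadata is resolved: pt3's superblock carries sequence number 4, newer than the configuring raid_bdev1 assembled around pt1 at sequence 2, so the stale raid is deleted and re-built from the newer superblock (the @3237 DEBUG line followed by raid_bdev_delete / raid_bdev_cleanup). That is why the verification below expects 1 discovered out of 2 operational:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1")
             | "\(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
  # -> 1/2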
00:16:57.429   23:50:27	-- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:16:57.429   23:50:27	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:57.429   23:50:27	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:57.429   23:50:27	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:57.429   23:50:27	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:57.429   23:50:27	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:16:57.429   23:50:27	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:57.429   23:50:27	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:57.429   23:50:27	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:57.429   23:50:27	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:57.429    23:50:27	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:57.429    23:50:27	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:57.688   23:50:28	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:57.688    "name": "raid_bdev1",
00:16:57.688    "uuid": "db92398c-76b0-4df4-bac0-203952987521",
00:16:57.688    "strip_size_kb": 0,
00:16:57.688    "state": "configuring",
00:16:57.688    "raid_level": "raid1",
00:16:57.688    "superblock": true,
00:16:57.688    "num_base_bdevs": 3,
00:16:57.688    "num_base_bdevs_discovered": 1,
00:16:57.688    "num_base_bdevs_operational": 2,
00:16:57.688    "base_bdevs_list": [
00:16:57.688      {
00:16:57.688        "name": null,
00:16:57.688        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:57.688        "is_configured": false,
00:16:57.688        "data_offset": 2048,
00:16:57.688        "data_size": 63488
00:16:57.688      },
00:16:57.688      {
00:16:57.688        "name": null,
00:16:57.688        "uuid": "1bcd29a2-b8a5-563e-8337-d4fc56325cc7",
00:16:57.688        "is_configured": false,
00:16:57.688        "data_offset": 2048,
00:16:57.688        "data_size": 63488
00:16:57.688      },
00:16:57.688      {
00:16:57.688        "name": "pt3",
00:16:57.688        "uuid": "9b499895-a0bf-51cb-ade8-c7ae18f641fd",
00:16:57.688        "is_configured": true,
00:16:57.688        "data_offset": 2048,
00:16:57.688        "data_size": 63488
00:16:57.688      }
00:16:57.688    ]
00:16:57.688  }'
00:16:57.688   23:50:28	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:57.688   23:50:28	-- common/autotest_common.sh@10 -- # set +x
00:16:58.256   23:50:28	-- bdev/bdev_raid.sh@497 -- # (( i = 1 ))
00:16:58.256   23:50:28	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:16:58.256   23:50:28	-- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:58.514  [2024-12-13 23:50:29.079660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:58.514  [2024-12-13 23:50:29.079722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:58.514  [2024-12-13 23:50:29.079751] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:16:58.514  [2024-12-13 23:50:29.079774] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:58.514  [2024-12-13 23:50:29.080112] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:58.514  [2024-12-13 23:50:29.080153] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:58.514  [2024-12-13 23:50:29.080225] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:16:58.514  [2024-12-13 23:50:29.080244] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:58.514  [2024-12-13 23:50:29.080341] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80
00:16:58.514  [2024-12-13 23:50:29.080353] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:16:58.515  [2024-12-13 23:50:29.080451] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:16:58.515  [2024-12-13 23:50:29.080737] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80
00:16:58.515  [2024-12-13 23:50:29.080758] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80
00:16:58.515  [2024-12-13 23:50:29.080868] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:58.515  pt2
00:16:58.515   23:50:29	-- bdev/bdev_raid.sh@497 -- # (( i++ ))
00:16:58.515   23:50:29	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:16:58.515   23:50:29	-- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:16:58.515   23:50:29	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:58.515   23:50:29	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:58.515   23:50:29	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:58.515   23:50:29	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:58.515   23:50:29	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:16:58.515   23:50:29	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:58.515   23:50:29	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:58.515   23:50:29	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:58.515   23:50:29	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:58.515    23:50:29	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:58.515    23:50:29	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:58.773   23:50:29	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:58.773    "name": "raid_bdev1",
00:16:58.773    "uuid": "db92398c-76b0-4df4-bac0-203952987521",
00:16:58.773    "strip_size_kb": 0,
00:16:58.773    "state": "online",
00:16:58.773    "raid_level": "raid1",
00:16:58.773    "superblock": true,
00:16:58.773    "num_base_bdevs": 3,
00:16:58.773    "num_base_bdevs_discovered": 2,
00:16:58.773    "num_base_bdevs_operational": 2,
00:16:58.773    "base_bdevs_list": [
00:16:58.773      {
00:16:58.773        "name": null,
00:16:58.773        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:58.773        "is_configured": false,
00:16:58.773        "data_offset": 2048,
00:16:58.773        "data_size": 63488
00:16:58.773      },
00:16:58.773      {
00:16:58.773        "name": "pt2",
00:16:58.773        "uuid": "1bcd29a2-b8a5-563e-8337-d4fc56325cc7",
00:16:58.773        "is_configured": true,
00:16:58.773        "data_offset": 2048,
00:16:58.773        "data_size": 63488
00:16:58.773      },
00:16:58.773      {
00:16:58.773        "name": "pt3",
00:16:58.773        "uuid": "9b499895-a0bf-51cb-ade8-c7ae18f641fd",
00:16:58.773        "is_configured": true,
00:16:58.773        "data_offset": 2048,
00:16:58.773        "data_size": 63488
00:16:58.773      }
00:16:58.773    ]
00:16:58.773  }'
00:16:58.773   23:50:29	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:58.773   23:50:29	-- common/autotest_common.sh@10 -- # set +x
00:16:59.340    23:50:29	-- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:16:59.340    23:50:29	-- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid'
00:16:59.599  [2024-12-13 23:50:30.152002] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:59.599   23:50:30	-- bdev/bdev_raid.sh@506 -- # '[' db92398c-76b0-4df4-bac0-203952987521 '!=' db92398c-76b0-4df4-bac0-203952987521 ']'
00:16:59.599   23:50:30	-- bdev/bdev_raid.sh@511 -- # killprocess 117742
00:16:59.599   23:50:30	-- common/autotest_common.sh@936 -- # '[' -z 117742 ']'
00:16:59.599   23:50:30	-- common/autotest_common.sh@940 -- # kill -0 117742
00:16:59.599    23:50:30	-- common/autotest_common.sh@941 -- # uname
00:16:59.599   23:50:30	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:59.599    23:50:30	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117742
00:16:59.599  killing process with pid 117742
00:16:59.599   23:50:30	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:59.599   23:50:30	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:59.599   23:50:30	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 117742'
00:16:59.599   23:50:30	-- common/autotest_common.sh@955 -- # kill 117742
00:16:59.599  [2024-12-13 23:50:30.193513] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:59.599   23:50:30	-- common/autotest_common.sh@960 -- # wait 117742
00:16:59.599  [2024-12-13 23:50:30.193565] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:59.599  [2024-12-13 23:50:30.193623] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:59.599  [2024-12-13 23:50:30.193634] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline
00:16:59.857  [2024-12-13 23:50:30.392616] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:00.794  ************************************
00:17:00.794  END TEST raid_superblock_test
00:17:00.794  ************************************
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@513 -- # return 0
00:17:00.794  
00:17:00.794  real	0m18.692s
00:17:00.794  user	0m34.285s
00:17:00.794  sys	0m2.147s
00:17:00.794   23:50:31	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:17:00.794   23:50:31	-- common/autotest_common.sh@10 -- # set +x
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@725 -- # for n in {2..4}
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false
00:17:00.794   23:50:31	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:17:00.794   23:50:31	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:00.794   23:50:31	-- common/autotest_common.sh@10 -- # set +x
00:17:00.794  ************************************
00:17:00.794  START TEST raid_state_function_test
00:17:00.794  ************************************
00:17:00.794   23:50:31	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 false
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:17:00.794    23:50:31	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:17:00.794    23:50:31	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:00.794    23:50:31	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:17:00.794    23:50:31	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:00.794    23:50:31	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:00.794    23:50:31	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:17:00.794    23:50:31	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:00.794    23:50:31	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:00.794    23:50:31	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:17:00.794    23:50:31	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:00.794    23:50:31	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:00.794    23:50:31	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:17:00.794    23:50:31	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:00.794    23:50:31	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@226 -- # raid_pid=118348
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118348'
00:17:00.794  Process raid pid: 118348
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:17:00.794   23:50:31	-- bdev/bdev_raid.sh@228 -- # waitforlisten 118348 /var/tmp/spdk-raid.sock
00:17:00.794   23:50:31	-- common/autotest_common.sh@829 -- # '[' -z 118348 ']'
00:17:00.794   23:50:31	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:17:00.794   23:50:31	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:00.794  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:17:00.794   23:50:31	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:17:00.794   23:50:31	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:00.794   23:50:31	-- common/autotest_common.sh@10 -- # set +x
00:17:01.053  [2024-12-13 23:50:31.533445] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:01.053  [2024-12-13 23:50:31.533656] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:01.053  [2024-12-13 23:50:31.705845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:01.312  [2024-12-13 23:50:31.952097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:01.571  [2024-12-13 23:50:32.126415] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:01.830   23:50:32	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:01.830   23:50:32	-- common/autotest_common.sh@862 -- # return 0
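The block above boots a fresh target for raid_state_function_test: bdev_svc is launched with -L bdev_raid (which is what enables the *DEBUG* raid lines throughout this log) and waitforlisten polls until the RPC socket answers before any command is sent. A sketch of that startup pattern using the paths from this run, assuming autotest_common.sh is sourced for waitforlisten:

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # Helper from autotest_common.sh: block until the pid is alive and
  # the UNIX-domain RPC socket accepts connections.
  waitforlisten $raid_pid /var/tmp/spdk-raid.sock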
00:17:01.830   23:50:32	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:02.089  [2024-12-13 23:50:32.722916] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:02.089  [2024-12-13 23:50:32.723384] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:02.089  [2024-12-13 23:50:32.723416] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:02.089  [2024-12-13 23:50:32.723555] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:02.089  [2024-12-13 23:50:32.723600] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:02.089  [2024-12-13 23:50:32.723750] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:02.089  [2024-12-13 23:50:32.723780] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:02.089  [2024-12-13 23:50:32.723913] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
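bdev_raid_create tolerates base bdevs that do not exist yet: every "doesn't exist now" line above is informational, the RPC still succeeds, and Existed_Raid is parked in "configuring" with num_base_bdevs_discovered 0 (see the JSON below) until the named bdevs register. The first member arrives a few steps later via:

  # 32 MiB of 512-byte blocks -- matches the 65536-block, 512-byte
  # BaseBdev1 dumped further down.
  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1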
00:17:02.089   23:50:32	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:17:02.089   23:50:32	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:02.089   23:50:32	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:02.089   23:50:32	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:02.089   23:50:32	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:02.089   23:50:32	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:02.089   23:50:32	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:02.089   23:50:32	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:02.089   23:50:32	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:02.089   23:50:32	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:02.089    23:50:32	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:02.089    23:50:32	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:02.348   23:50:32	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:02.348    "name": "Existed_Raid",
00:17:02.348    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:02.348    "strip_size_kb": 64,
00:17:02.348    "state": "configuring",
00:17:02.348    "raid_level": "raid0",
00:17:02.348    "superblock": false,
00:17:02.348    "num_base_bdevs": 4,
00:17:02.348    "num_base_bdevs_discovered": 0,
00:17:02.348    "num_base_bdevs_operational": 4,
00:17:02.348    "base_bdevs_list": [
00:17:02.348      {
00:17:02.348        "name": "BaseBdev1",
00:17:02.348        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:02.348        "is_configured": false,
00:17:02.348        "data_offset": 0,
00:17:02.348        "data_size": 0
00:17:02.348      },
00:17:02.348      {
00:17:02.348        "name": "BaseBdev2",
00:17:02.348        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:02.348        "is_configured": false,
00:17:02.348        "data_offset": 0,
00:17:02.348        "data_size": 0
00:17:02.348      },
00:17:02.348      {
00:17:02.348        "name": "BaseBdev3",
00:17:02.348        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:02.348        "is_configured": false,
00:17:02.348        "data_offset": 0,
00:17:02.348        "data_size": 0
00:17:02.348      },
00:17:02.348      {
00:17:02.348        "name": "BaseBdev4",
00:17:02.348        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:02.348        "is_configured": false,
00:17:02.348        "data_offset": 0,
00:17:02.348        "data_size": 0
00:17:02.348      }
00:17:02.348    ]
00:17:02.348  }'
00:17:02.348   23:50:32	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:02.348   23:50:32	-- common/autotest_common.sh@10 -- # set +x
00:17:02.915   23:50:33	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:03.174  [2024-12-13 23:50:33.850998] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:03.174  [2024-12-13 23:50:33.851033] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:17:03.174   23:50:33	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:03.433  [2024-12-13 23:50:34.035056] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:03.433  [2024-12-13 23:50:34.035404] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:03.433  [2024-12-13 23:50:34.035435] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:03.433  [2024-12-13 23:50:34.035599] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:03.433  [2024-12-13 23:50:34.035618] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:03.433  [2024-12-13 23:50:34.035767] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:03.433  [2024-12-13 23:50:34.035795] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:03.433  [2024-12-13 23:50:34.035925] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:03.433   23:50:34	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:17:03.692  [2024-12-13 23:50:34.250080] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:03.692  BaseBdev1
00:17:03.692   23:50:34	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:17:03.692   23:50:34	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:17:03.692   23:50:34	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:03.692   23:50:34	-- common/autotest_common.sh@899 -- # local i
00:17:03.692   23:50:34	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:03.692   23:50:34	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:03.692   23:50:34	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:03.951   23:50:34	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:04.209  [
00:17:04.209    {
00:17:04.209      "name": "BaseBdev1",
00:17:04.209      "aliases": [
00:17:04.209        "ec552dde-c9fc-432b-8a67-1c9df20c3efe"
00:17:04.209      ],
00:17:04.209      "product_name": "Malloc disk",
00:17:04.209      "block_size": 512,
00:17:04.209      "num_blocks": 65536,
00:17:04.210      "uuid": "ec552dde-c9fc-432b-8a67-1c9df20c3efe",
00:17:04.210      "assigned_rate_limits": {
00:17:04.210        "rw_ios_per_sec": 0,
00:17:04.210        "rw_mbytes_per_sec": 0,
00:17:04.210        "r_mbytes_per_sec": 0,
00:17:04.210        "w_mbytes_per_sec": 0
00:17:04.210      },
00:17:04.210      "claimed": true,
00:17:04.210      "claim_type": "exclusive_write",
00:17:04.210      "zoned": false,
00:17:04.210      "supported_io_types": {
00:17:04.210        "read": true,
00:17:04.210        "write": true,
00:17:04.210        "unmap": true,
00:17:04.210        "write_zeroes": true,
00:17:04.210        "flush": true,
00:17:04.210        "reset": true,
00:17:04.210        "compare": false,
00:17:04.210        "compare_and_write": false,
00:17:04.210        "abort": true,
00:17:04.210        "nvme_admin": false,
00:17:04.210        "nvme_io": false
00:17:04.210      },
00:17:04.210      "memory_domains": [
00:17:04.210        {
00:17:04.210          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:04.210          "dma_device_type": 2
00:17:04.210        }
00:17:04.210      ],
00:17:04.210      "driver_specific": {}
00:17:04.210    }
00:17:04.210  ]
00:17:04.210   23:50:34	-- common/autotest_common.sh@905 -- # return 0
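The waitforbdev helper traced above reduces to two RPCs: bdev_wait_for_examine to let bdev examination finish, then a bdev_get_bdevs bounded by a 2000 ms timeout. A hedged reconstruction from the trace, reusing the rpc shorthand sketched earlier (the real autotest_common.sh helper may loop or retry differently):

  waitforbdev() {
      local bdev_name=$1
      local bdev_timeout=${2:-2000}   # trace shows 2000 ms when no timeout argument is given
      rpc bdev_wait_for_examine
      # -t makes the RPC wait up to bdev_timeout ms for the bdev to appear
      rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
  }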
00:17:04.210   23:50:34	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:17:04.210   23:50:34	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:04.210   23:50:34	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:04.210   23:50:34	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:04.210   23:50:34	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:04.210   23:50:34	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:04.210   23:50:34	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:04.210   23:50:34	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:04.210   23:50:34	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:04.210   23:50:34	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:04.210    23:50:34	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:04.210    23:50:34	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:04.472   23:50:34	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:04.472    "name": "Existed_Raid",
00:17:04.472    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:04.472    "strip_size_kb": 64,
00:17:04.472    "state": "configuring",
00:17:04.472    "raid_level": "raid0",
00:17:04.472    "superblock": false,
00:17:04.472    "num_base_bdevs": 4,
00:17:04.472    "num_base_bdevs_discovered": 1,
00:17:04.472    "num_base_bdevs_operational": 4,
00:17:04.472    "base_bdevs_list": [
00:17:04.472      {
00:17:04.472        "name": "BaseBdev1",
00:17:04.472        "uuid": "ec552dde-c9fc-432b-8a67-1c9df20c3efe",
00:17:04.472        "is_configured": true,
00:17:04.472        "data_offset": 0,
00:17:04.472        "data_size": 65536
00:17:04.472      },
00:17:04.472      {
00:17:04.472        "name": "BaseBdev2",
00:17:04.472        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:04.472        "is_configured": false,
00:17:04.472        "data_offset": 0,
00:17:04.472        "data_size": 0
00:17:04.472      },
00:17:04.472      {
00:17:04.472        "name": "BaseBdev3",
00:17:04.472        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:04.472        "is_configured": false,
00:17:04.472        "data_offset": 0,
00:17:04.472        "data_size": 0
00:17:04.472      },
00:17:04.472      {
00:17:04.472        "name": "BaseBdev4",
00:17:04.472        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:04.472        "is_configured": false,
00:17:04.472        "data_offset": 0,
00:17:04.472        "data_size": 0
00:17:04.472      }
00:17:04.472    ]
00:17:04.472  }'
00:17:04.472   23:50:34	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:04.472   23:50:34	-- common/autotest_common.sh@10 -- # set +x
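verify_raid_bdev_state then asserts field by field on that record, matching the locals set at the top of the function. A compact sketch of the checks implied here (the exact comparisons in bdev_raid.sh may be written differently):

  [[ $(jq -r '.state' <<< "$raid_bdev_info") == configuring ]]
  [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == raid0 ]]
  [[ $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") -eq 64 ]]
  [[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") -eq 4 ]]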
00:17:05.040   23:50:35	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:05.298  [2024-12-13 23:50:35.862417] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:05.298  [2024-12-13 23:50:35.862473] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:17:05.298   23:50:35	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:17:05.298   23:50:35	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:05.556  [2024-12-13 23:50:36.058504] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:05.556  [2024-12-13 23:50:36.060416] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:05.556  [2024-12-13 23:50:36.060878] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:05.556  [2024-12-13 23:50:36.060911] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:05.556  [2024-12-13 23:50:36.061051] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:05.556  [2024-12-13 23:50:36.061070] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:05.556  [2024-12-13 23:50:36.061191] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:05.556   23:50:36	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:17:05.556   23:50:36	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:05.556   23:50:36	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:17:05.556   23:50:36	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:05.556   23:50:36	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:05.556   23:50:36	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:05.556   23:50:36	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:05.556   23:50:36	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:05.556   23:50:36	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:05.556   23:50:36	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:05.556   23:50:36	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:05.556   23:50:36	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:05.556    23:50:36	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:05.556    23:50:36	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:05.814   23:50:36	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:05.814    "name": "Existed_Raid",
00:17:05.814    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:05.814    "strip_size_kb": 64,
00:17:05.814    "state": "configuring",
00:17:05.814    "raid_level": "raid0",
00:17:05.814    "superblock": false,
00:17:05.814    "num_base_bdevs": 4,
00:17:05.814    "num_base_bdevs_discovered": 1,
00:17:05.814    "num_base_bdevs_operational": 4,
00:17:05.814    "base_bdevs_list": [
00:17:05.814      {
00:17:05.814        "name": "BaseBdev1",
00:17:05.814        "uuid": "ec552dde-c9fc-432b-8a67-1c9df20c3efe",
00:17:05.814        "is_configured": true,
00:17:05.814        "data_offset": 0,
00:17:05.814        "data_size": 65536
00:17:05.814      },
00:17:05.814      {
00:17:05.814        "name": "BaseBdev2",
00:17:05.814        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:05.814        "is_configured": false,
00:17:05.814        "data_offset": 0,
00:17:05.814        "data_size": 0
00:17:05.814      },
00:17:05.814      {
00:17:05.814        "name": "BaseBdev3",
00:17:05.814        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:05.814        "is_configured": false,
00:17:05.814        "data_offset": 0,
00:17:05.814        "data_size": 0
00:17:05.814      },
00:17:05.814      {
00:17:05.814        "name": "BaseBdev4",
00:17:05.814        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:05.814        "is_configured": false,
00:17:05.814        "data_offset": 0,
00:17:05.814        "data_size": 0
00:17:05.814      }
00:17:05.814    ]
00:17:05.814  }'
00:17:05.814   23:50:36	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:05.814   23:50:36	-- common/autotest_common.sh@10 -- # set +x
00:17:06.381   23:50:36	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:17:06.643  [2024-12-13 23:50:37.207209] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:06.643  BaseBdev2
00:17:06.643   23:50:37	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:17:06.643   23:50:37	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:17:06.643   23:50:37	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:06.643   23:50:37	-- common/autotest_common.sh@899 -- # local i
00:17:06.643   23:50:37	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:06.643   23:50:37	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:06.643   23:50:37	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:06.903   23:50:37	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:17:07.161  [
00:17:07.161    {
00:17:07.161      "name": "BaseBdev2",
00:17:07.161      "aliases": [
00:17:07.161        "f178b4da-e571-4063-b407-c43cddf736e9"
00:17:07.161      ],
00:17:07.161      "product_name": "Malloc disk",
00:17:07.161      "block_size": 512,
00:17:07.161      "num_blocks": 65536,
00:17:07.161      "uuid": "f178b4da-e571-4063-b407-c43cddf736e9",
00:17:07.161      "assigned_rate_limits": {
00:17:07.161        "rw_ios_per_sec": 0,
00:17:07.161        "rw_mbytes_per_sec": 0,
00:17:07.161        "r_mbytes_per_sec": 0,
00:17:07.161        "w_mbytes_per_sec": 0
00:17:07.161      },
00:17:07.161      "claimed": true,
00:17:07.161      "claim_type": "exclusive_write",
00:17:07.161      "zoned": false,
00:17:07.161      "supported_io_types": {
00:17:07.161        "read": true,
00:17:07.161        "write": true,
00:17:07.161        "unmap": true,
00:17:07.162        "write_zeroes": true,
00:17:07.162        "flush": true,
00:17:07.162        "reset": true,
00:17:07.162        "compare": false,
00:17:07.162        "compare_and_write": false,
00:17:07.162        "abort": true,
00:17:07.162        "nvme_admin": false,
00:17:07.162        "nvme_io": false
00:17:07.162      },
00:17:07.162      "memory_domains": [
00:17:07.162        {
00:17:07.162          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:07.162          "dma_device_type": 2
00:17:07.162        }
00:17:07.162      ],
00:17:07.162      "driver_specific": {}
00:17:07.162    }
00:17:07.162  ]
00:17:07.162   23:50:37	-- common/autotest_common.sh@905 -- # return 0
00:17:07.162   23:50:37	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:07.162   23:50:37	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:07.162   23:50:37	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:17:07.162   23:50:37	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:07.162   23:50:37	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:07.162   23:50:37	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:07.162   23:50:37	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:07.162   23:50:37	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:07.162   23:50:37	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:07.162   23:50:37	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:07.162   23:50:37	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:07.162   23:50:37	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:07.162    23:50:37	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:07.162    23:50:37	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:07.420   23:50:37	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:07.420    "name": "Existed_Raid",
00:17:07.420    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:07.420    "strip_size_kb": 64,
00:17:07.420    "state": "configuring",
00:17:07.420    "raid_level": "raid0",
00:17:07.420    "superblock": false,
00:17:07.420    "num_base_bdevs": 4,
00:17:07.420    "num_base_bdevs_discovered": 2,
00:17:07.420    "num_base_bdevs_operational": 4,
00:17:07.420    "base_bdevs_list": [
00:17:07.420      {
00:17:07.420        "name": "BaseBdev1",
00:17:07.420        "uuid": "ec552dde-c9fc-432b-8a67-1c9df20c3efe",
00:17:07.420        "is_configured": true,
00:17:07.420        "data_offset": 0,
00:17:07.420        "data_size": 65536
00:17:07.420      },
00:17:07.420      {
00:17:07.420        "name": "BaseBdev2",
00:17:07.420        "uuid": "f178b4da-e571-4063-b407-c43cddf736e9",
00:17:07.420        "is_configured": true,
00:17:07.420        "data_offset": 0,
00:17:07.420        "data_size": 65536
00:17:07.420      },
00:17:07.420      {
00:17:07.420        "name": "BaseBdev3",
00:17:07.420        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:07.420        "is_configured": false,
00:17:07.420        "data_offset": 0,
00:17:07.420        "data_size": 0
00:17:07.420      },
00:17:07.420      {
00:17:07.420        "name": "BaseBdev4",
00:17:07.420        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:07.420        "is_configured": false,
00:17:07.420        "data_offset": 0,
00:17:07.420        "data_size": 0
00:17:07.420      }
00:17:07.420    ]
00:17:07.420  }'
00:17:07.420   23:50:37	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:07.420   23:50:37	-- common/autotest_common.sh@10 -- # set +x
00:17:08.034   23:50:38	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:17:08.305  [2024-12-13 23:50:38.765977] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:17:08.306  BaseBdev3
00:17:08.306   23:50:38	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:17:08.306   23:50:38	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:17:08.306   23:50:38	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:08.306   23:50:38	-- common/autotest_common.sh@899 -- # local i
00:17:08.306   23:50:38	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:08.306   23:50:38	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:08.306   23:50:38	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:08.306   23:50:38	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:17:08.565  [
00:17:08.565    {
00:17:08.565      "name": "BaseBdev3",
00:17:08.565      "aliases": [
00:17:08.565        "c9ae53f1-a37d-4851-85bf-66ca9c17b4a7"
00:17:08.565      ],
00:17:08.565      "product_name": "Malloc disk",
00:17:08.565      "block_size": 512,
00:17:08.565      "num_blocks": 65536,
00:17:08.565      "uuid": "c9ae53f1-a37d-4851-85bf-66ca9c17b4a7",
00:17:08.565      "assigned_rate_limits": {
00:17:08.565        "rw_ios_per_sec": 0,
00:17:08.565        "rw_mbytes_per_sec": 0,
00:17:08.565        "r_mbytes_per_sec": 0,
00:17:08.565        "w_mbytes_per_sec": 0
00:17:08.565      },
00:17:08.565      "claimed": true,
00:17:08.565      "claim_type": "exclusive_write",
00:17:08.565      "zoned": false,
00:17:08.565      "supported_io_types": {
00:17:08.565        "read": true,
00:17:08.565        "write": true,
00:17:08.565        "unmap": true,
00:17:08.565        "write_zeroes": true,
00:17:08.565        "flush": true,
00:17:08.565        "reset": true,
00:17:08.565        "compare": false,
00:17:08.565        "compare_and_write": false,
00:17:08.565        "abort": true,
00:17:08.565        "nvme_admin": false,
00:17:08.565        "nvme_io": false
00:17:08.565      },
00:17:08.565      "memory_domains": [
00:17:08.565        {
00:17:08.565          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:08.565          "dma_device_type": 2
00:17:08.565        }
00:17:08.565      ],
00:17:08.565      "driver_specific": {}
00:17:08.565    }
00:17:08.565  ]
00:17:08.565   23:50:39	-- common/autotest_common.sh@905 -- # return 0
00:17:08.565   23:50:39	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:08.565   23:50:39	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:08.565   23:50:39	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:17:08.565   23:50:39	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:08.565   23:50:39	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:08.565   23:50:39	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:08.565   23:50:39	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:08.565   23:50:39	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:08.565   23:50:39	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:08.565   23:50:39	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:08.565   23:50:39	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:08.565   23:50:39	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:08.565    23:50:39	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:08.565    23:50:39	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:08.824   23:50:39	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:08.824    "name": "Existed_Raid",
00:17:08.824    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:08.824    "strip_size_kb": 64,
00:17:08.824    "state": "configuring",
00:17:08.824    "raid_level": "raid0",
00:17:08.824    "superblock": false,
00:17:08.824    "num_base_bdevs": 4,
00:17:08.824    "num_base_bdevs_discovered": 3,
00:17:08.824    "num_base_bdevs_operational": 4,
00:17:08.824    "base_bdevs_list": [
00:17:08.824      {
00:17:08.824        "name": "BaseBdev1",
00:17:08.824        "uuid": "ec552dde-c9fc-432b-8a67-1c9df20c3efe",
00:17:08.824        "is_configured": true,
00:17:08.824        "data_offset": 0,
00:17:08.824        "data_size": 65536
00:17:08.824      },
00:17:08.824      {
00:17:08.824        "name": "BaseBdev2",
00:17:08.824        "uuid": "f178b4da-e571-4063-b407-c43cddf736e9",
00:17:08.824        "is_configured": true,
00:17:08.824        "data_offset": 0,
00:17:08.824        "data_size": 65536
00:17:08.824      },
00:17:08.824      {
00:17:08.824        "name": "BaseBdev3",
00:17:08.824        "uuid": "c9ae53f1-a37d-4851-85bf-66ca9c17b4a7",
00:17:08.824        "is_configured": true,
00:17:08.824        "data_offset": 0,
00:17:08.824        "data_size": 65536
00:17:08.824      },
00:17:08.824      {
00:17:08.824        "name": "BaseBdev4",
00:17:08.824        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:08.824        "is_configured": false,
00:17:08.824        "data_offset": 0,
00:17:08.824        "data_size": 0
00:17:08.824      }
00:17:08.824    ]
00:17:08.824  }'
00:17:08.824   23:50:39	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:08.824   23:50:39	-- common/autotest_common.sh@10 -- # set +x
00:17:09.391   23:50:40	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:17:09.650  [2024-12-13 23:50:40.301688] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:17:09.650  [2024-12-13 23:50:40.301740] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:17:09.650  [2024-12-13 23:50:40.301749] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:17:09.650  [2024-12-13 23:50:40.301892] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:17:09.650  [2024-12-13 23:50:40.302239] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:17:09.650  [2024-12-13 23:50:40.302260] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80
00:17:09.650  [2024-12-13 23:50:40.302510] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:09.650  BaseBdev4
00:17:09.650   23:50:40	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:17:09.650   23:50:40	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:17:09.650   23:50:40	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:09.650   23:50:40	-- common/autotest_common.sh@899 -- # local i
00:17:09.650   23:50:40	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:09.650   23:50:40	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:09.650   23:50:40	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:09.908   23:50:40	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:17:10.166  [
00:17:10.166    {
00:17:10.166      "name": "BaseBdev4",
00:17:10.166      "aliases": [
00:17:10.166        "97a52bb7-d9e0-42b5-b9de-b5f7125126e3"
00:17:10.166      ],
00:17:10.166      "product_name": "Malloc disk",
00:17:10.166      "block_size": 512,
00:17:10.166      "num_blocks": 65536,
00:17:10.166      "uuid": "97a52bb7-d9e0-42b5-b9de-b5f7125126e3",
00:17:10.166      "assigned_rate_limits": {
00:17:10.166        "rw_ios_per_sec": 0,
00:17:10.166        "rw_mbytes_per_sec": 0,
00:17:10.166        "r_mbytes_per_sec": 0,
00:17:10.166        "w_mbytes_per_sec": 0
00:17:10.166      },
00:17:10.166      "claimed": true,
00:17:10.166      "claim_type": "exclusive_write",
00:17:10.166      "zoned": false,
00:17:10.166      "supported_io_types": {
00:17:10.166        "read": true,
00:17:10.166        "write": true,
00:17:10.166        "unmap": true,
00:17:10.166        "write_zeroes": true,
00:17:10.166        "flush": true,
00:17:10.166        "reset": true,
00:17:10.166        "compare": false,
00:17:10.166        "compare_and_write": false,
00:17:10.166        "abort": true,
00:17:10.166        "nvme_admin": false,
00:17:10.166        "nvme_io": false
00:17:10.166      },
00:17:10.166      "memory_domains": [
00:17:10.166        {
00:17:10.166          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:10.166          "dma_device_type": 2
00:17:10.166        }
00:17:10.166      ],
00:17:10.166      "driver_specific": {}
00:17:10.167    }
00:17:10.167  ]
00:17:10.167   23:50:40	-- common/autotest_common.sh@905 -- # return 0
00:17:10.167   23:50:40	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:10.167   23:50:40	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:10.167   23:50:40	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:17:10.167   23:50:40	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:10.167   23:50:40	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:17:10.167   23:50:40	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:10.167   23:50:40	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:10.167   23:50:40	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:10.167   23:50:40	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:10.167   23:50:40	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:10.167   23:50:40	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:10.167   23:50:40	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:10.167    23:50:40	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:10.167    23:50:40	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:10.425   23:50:40	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:10.425    "name": "Existed_Raid",
00:17:10.425    "uuid": "483ad096-b18e-4cf9-9d59-f56910785efa",
00:17:10.425    "strip_size_kb": 64,
00:17:10.425    "state": "online",
00:17:10.425    "raid_level": "raid0",
00:17:10.425    "superblock": false,
00:17:10.425    "num_base_bdevs": 4,
00:17:10.425    "num_base_bdevs_discovered": 4,
00:17:10.425    "num_base_bdevs_operational": 4,
00:17:10.425    "base_bdevs_list": [
00:17:10.425      {
00:17:10.425        "name": "BaseBdev1",
00:17:10.425        "uuid": "ec552dde-c9fc-432b-8a67-1c9df20c3efe",
00:17:10.425        "is_configured": true,
00:17:10.425        "data_offset": 0,
00:17:10.425        "data_size": 65536
00:17:10.425      },
00:17:10.425      {
00:17:10.425        "name": "BaseBdev2",
00:17:10.425        "uuid": "f178b4da-e571-4063-b407-c43cddf736e9",
00:17:10.425        "is_configured": true,
00:17:10.425        "data_offset": 0,
00:17:10.425        "data_size": 65536
00:17:10.425      },
00:17:10.425      {
00:17:10.425        "name": "BaseBdev3",
00:17:10.425        "uuid": "c9ae53f1-a37d-4851-85bf-66ca9c17b4a7",
00:17:10.425        "is_configured": true,
00:17:10.425        "data_offset": 0,
00:17:10.425        "data_size": 65536
00:17:10.425      },
00:17:10.425      {
00:17:10.425        "name": "BaseBdev4",
00:17:10.425        "uuid": "97a52bb7-d9e0-42b5-b9de-b5f7125126e3",
00:17:10.425        "is_configured": true,
00:17:10.425        "data_offset": 0,
00:17:10.425        "data_size": 65536
00:17:10.425      }
00:17:10.425    ]
00:17:10.425  }'
00:17:10.425   23:50:40	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:10.425   23:50:40	-- common/autotest_common.sh@10 -- # set +x
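With BaseBdev4 claimed, the discovered count reaches four and the array flips from configuring to online; this is also the point where Existed_Raid trades the all-zero placeholder for a real uuid. The construction performed by the loop at bdev_raid.sh@254 boils down to this sketch (32 and 512 are MiB and block size, which is where num_blocks 65536 in the dumps comes from):

  # Create the array first; each base bdev is claimed as it appears.
  rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  for i in 1 2 3 4; do
      rpc bdev_malloc_create 32 512 -b "BaseBdev$i"   # 32 MiB / 512 B = 65536 blocks
      waitforbdev "BaseBdev$i"
  done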
00:17:10.993   23:50:41	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:17:11.252  [2024-12-13 23:50:41.802721] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:11.252  [2024-12-13 23:50:41.802754] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:11.252  [2024-12-13 23:50:41.802816] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:11.252   23:50:41	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:17:11.252   23:50:41	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:17:11.252   23:50:41	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:17:11.252   23:50:41	-- bdev/bdev_raid.sh@197 -- # return 1
00:17:11.252   23:50:41	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:17:11.252   23:50:41	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3
00:17:11.252   23:50:41	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:11.252   23:50:41	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:17:11.252   23:50:41	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:11.252   23:50:41	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:11.252   23:50:41	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:17:11.252   23:50:41	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:11.252   23:50:41	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:11.252   23:50:41	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:11.252   23:50:41	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:11.252    23:50:41	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:11.252    23:50:41	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:11.511   23:50:42	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:11.511    "name": "Existed_Raid",
00:17:11.511    "uuid": "483ad096-b18e-4cf9-9d59-f56910785efa",
00:17:11.511    "strip_size_kb": 64,
00:17:11.511    "state": "offline",
00:17:11.511    "raid_level": "raid0",
00:17:11.511    "superblock": false,
00:17:11.511    "num_base_bdevs": 4,
00:17:11.511    "num_base_bdevs_discovered": 3,
00:17:11.511    "num_base_bdevs_operational": 3,
00:17:11.511    "base_bdevs_list": [
00:17:11.511      {
00:17:11.511        "name": null,
00:17:11.511        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:11.511        "is_configured": false,
00:17:11.511        "data_offset": 0,
00:17:11.511        "data_size": 65536
00:17:11.511      },
00:17:11.511      {
00:17:11.511        "name": "BaseBdev2",
00:17:11.511        "uuid": "f178b4da-e571-4063-b407-c43cddf736e9",
00:17:11.511        "is_configured": true,
00:17:11.511        "data_offset": 0,
00:17:11.511        "data_size": 65536
00:17:11.511      },
00:17:11.511      {
00:17:11.511        "name": "BaseBdev3",
00:17:11.511        "uuid": "c9ae53f1-a37d-4851-85bf-66ca9c17b4a7",
00:17:11.511        "is_configured": true,
00:17:11.511        "data_offset": 0,
00:17:11.511        "data_size": 65536
00:17:11.511      },
00:17:11.511      {
00:17:11.511        "name": "BaseBdev4",
00:17:11.511        "uuid": "97a52bb7-d9e0-42b5-b9de-b5f7125126e3",
00:17:11.511        "is_configured": true,
00:17:11.511        "data_offset": 0,
00:17:11.511        "data_size": 65536
00:17:11.511      }
00:17:11.511    ]
00:17:11.511  }'
00:17:11.511   23:50:42	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:11.511   23:50:42	-- common/autotest_common.sh@10 -- # set +x
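Pulling BaseBdev1 out of an online raid0 is expected to take the whole array down: has_redundancy returned 1 for raid0 above, so expected_state became offline, and the surviving record shows a null name in the first slot with three of four members discovered. A sketch of that decision; the set of redundant levels is an assumption here, the authoritative case list lives in bdev_raid.sh:

  has_redundancy() {
      case $1 in
          raid1) return 0 ;;   # assumed: mirrored levels survive member loss
          *)     return 1 ;;   # raid0 and friends: any member loss is fatal
      esac
  }
  expected_state=online
  has_redundancy raid0 || expected_state=offline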
00:17:12.079   23:50:42	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:17:12.079   23:50:42	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:12.079    23:50:42	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:12.079    23:50:42	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:12.338   23:50:42	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:12.338   23:50:42	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:12.338   23:50:42	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:17:12.338  [2024-12-13 23:50:43.042216] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:17:12.597   23:50:43	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:12.597   23:50:43	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:12.597    23:50:43	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:12.597    23:50:43	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:12.855   23:50:43	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:12.855   23:50:43	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:12.855   23:50:43	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:17:13.115  [2024-12-13 23:50:43.611356] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:17:13.115   23:50:43	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:13.115   23:50:43	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:13.115    23:50:43	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:13.115    23:50:43	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:13.374   23:50:43	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:13.374   23:50:43	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:13.374   23:50:43	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:17:13.633  [2024-12-13 23:50:44.154910] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:17:13.633  [2024-12-13 23:50:44.154964] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline
00:17:13.633   23:50:44	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:13.633   23:50:44	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:13.633    23:50:44	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:13.633    23:50:44	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:17:13.892   23:50:44	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:17:13.892   23:50:44	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:17:13.892   23:50:44	-- bdev/bdev_raid.sh@287 -- # killprocess 118348
00:17:13.892   23:50:44	-- common/autotest_common.sh@936 -- # '[' -z 118348 ']'
00:17:13.892   23:50:44	-- common/autotest_common.sh@940 -- # kill -0 118348
00:17:13.892    23:50:44	-- common/autotest_common.sh@941 -- # uname
00:17:13.892   23:50:44	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:13.892    23:50:44	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 118348
00:17:13.892  killing process with pid 118348
00:17:13.892   23:50:44	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:13.892   23:50:44	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:13.892   23:50:44	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 118348'
00:17:13.892   23:50:44	-- common/autotest_common.sh@955 -- # kill 118348
00:17:13.892   23:50:44	-- common/autotest_common.sh@960 -- # wait 118348
00:17:13.892  [2024-12-13 23:50:44.452432] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:13.892  [2024-12-13 23:50:44.452531] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
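Teardown goes through killprocess: check that a pid was supplied and is still alive, read its comm name to decide whether it was launched via sudo, then kill and wait so the raid module can emit the fini DEBUG lines above. A loose reconstruction from the trace (the sudo branch is an assumption; this run took the plain-kill path for reactor_0):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] && kill -0 "$pid" || return 1   # pid given and process alive?
      local process_name=
      [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid"
      if [[ $process_name == sudo ]]; then
          sudo kill "$pid"   # assumed branch for targets started under sudo
      else
          kill "$pid"
      fi
      wait "$pid"
  }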
00:17:14.829  ************************************
00:17:14.829  END TEST raid_state_function_test
00:17:14.829  ************************************
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@289 -- # return 0
00:17:14.829  
00:17:14.829  real	0m13.902s
00:17:14.829  user	0m24.758s
00:17:14.829  sys	0m1.727s
00:17:14.829   23:50:45	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:17:14.829   23:50:45	-- common/autotest_common.sh@10 -- # set +x
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true
00:17:14.829   23:50:45	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:17:14.829   23:50:45	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:14.829   23:50:45	-- common/autotest_common.sh@10 -- # set +x
00:17:14.829  ************************************
00:17:14.829  START TEST raid_state_function_test_sb
00:17:14.829  ************************************
00:17:14.829   23:50:45	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 true
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:17:14.829    23:50:45	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:17:14.829    23:50:45	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:14.829    23:50:45	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:17:14.829    23:50:45	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:14.829    23:50:45	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:14.829    23:50:45	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:17:14.829    23:50:45	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:14.829    23:50:45	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:14.829    23:50:45	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:17:14.829    23:50:45	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:14.829    23:50:45	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:14.829    23:50:45	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:17:14.829    23:50:45	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:14.829    23:50:45	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:17:14.829   23:50:45	-- bdev/bdev_raid.sh@226 -- # raid_pid=118786
00:17:14.830  Process raid pid: 118786
00:17:14.830   23:50:45	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118786'
00:17:14.830   23:50:45	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:17:14.830   23:50:45	-- bdev/bdev_raid.sh@228 -- # waitforlisten 118786 /var/tmp/spdk-raid.sock
00:17:14.830   23:50:45	-- common/autotest_common.sh@829 -- # '[' -z 118786 ']'
00:17:14.830   23:50:45	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:17:14.830   23:50:45	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:14.830  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:17:14.830   23:50:45	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:17:14.830   23:50:45	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:14.830   23:50:45	-- common/autotest_common.sh@10 -- # set +x
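The superblock variant brings up its own target the same way the earlier test did: start bdev_svc on a private RPC socket with bdev_raid debug logging, remember its pid, and block in waitforlisten until the socket answers. Roughly (the backgrounding is implied by the trace; waitforlisten is the autotest_common.sh helper named above):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock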
00:17:14.830  [2024-12-13 23:50:45.499859] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:14.830  [2024-12-13 23:50:45.500042] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:15.089  [2024-12-13 23:50:45.667703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:15.348  [2024-12-13 23:50:45.839193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:15.348  [2024-12-13 23:50:46.006600] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:15.915   23:50:46	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:15.915   23:50:46	-- common/autotest_common.sh@862 -- # return 0
00:17:15.915   23:50:46	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:15.915  [2024-12-13 23:50:46.615639] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:15.915  [2024-12-13 23:50:46.616103] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:15.915  [2024-12-13 23:50:46.616131] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:15.915  [2024-12-13 23:50:46.616277] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:15.915  [2024-12-13 23:50:46.616316] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:15.915  [2024-12-13 23:50:46.616461] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:15.915  [2024-12-13 23:50:46.616486] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:15.915  [2024-12-13 23:50:46.616625] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:15.915   23:50:46	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:17:15.915   23:50:46	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:15.915   23:50:46	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:15.915   23:50:46	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:15.915   23:50:46	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:15.915   23:50:46	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:15.915   23:50:46	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:15.915   23:50:46	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:15.915   23:50:46	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:15.915   23:50:46	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:15.915    23:50:46	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:15.915    23:50:46	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:16.482   23:50:46	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:16.482    "name": "Existed_Raid",
00:17:16.482    "uuid": "7020a67c-108b-4827-a29f-ba72b2d08ab2",
00:17:16.482    "strip_size_kb": 64,
00:17:16.482    "state": "configuring",
00:17:16.482    "raid_level": "raid0",
00:17:16.482    "superblock": true,
00:17:16.482    "num_base_bdevs": 4,
00:17:16.482    "num_base_bdevs_discovered": 0,
00:17:16.482    "num_base_bdevs_operational": 4,
00:17:16.482    "base_bdevs_list": [
00:17:16.482      {
00:17:16.482        "name": "BaseBdev1",
00:17:16.482        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:16.482        "is_configured": false,
00:17:16.482        "data_offset": 0,
00:17:16.482        "data_size": 0
00:17:16.482      },
00:17:16.482      {
00:17:16.482        "name": "BaseBdev2",
00:17:16.482        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:16.482        "is_configured": false,
00:17:16.482        "data_offset": 0,
00:17:16.482        "data_size": 0
00:17:16.482      },
00:17:16.482      {
00:17:16.482        "name": "BaseBdev3",
00:17:16.482        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:16.482        "is_configured": false,
00:17:16.482        "data_offset": 0,
00:17:16.482        "data_size": 0
00:17:16.482      },
00:17:16.482      {
00:17:16.482        "name": "BaseBdev4",
00:17:16.482        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:16.482        "is_configured": false,
00:17:16.482        "data_offset": 0,
00:17:16.482        "data_size": 0
00:17:16.482      }
00:17:16.482    ]
00:17:16.482  }'
00:17:16.482   23:50:46	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:16.482   23:50:46	-- common/autotest_common.sh@10 -- # set +x
00:17:17.051   23:50:47	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:17.051  [2024-12-13 23:50:47.763684] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:17.051  [2024-12-13 23:50:47.763716] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:17:17.051   23:50:47	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:17.386  [2024-12-13 23:50:47.935798] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:17.386  [2024-12-13 23:50:47.936156] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:17.386  [2024-12-13 23:50:47.936184] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:17.386  [2024-12-13 23:50:47.936309] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:17.386  [2024-12-13 23:50:47.936325] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:17.386  [2024-12-13 23:50:47.936452] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:17.386  [2024-12-13 23:50:47.936476] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:17.386  [2024-12-13 23:50:47.936613] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:17.386   23:50:47	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:17:17.649  [2024-12-13 23:50:48.140511] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:17.649  BaseBdev1
00:17:17.649   23:50:48	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:17:17.649   23:50:48	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:17:17.649   23:50:48	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:17.649   23:50:48	-- common/autotest_common.sh@899 -- # local i
00:17:17.649   23:50:48	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:17.649   23:50:48	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:17.649   23:50:48	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:17.649   23:50:48	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:17.908  [
00:17:17.908    {
00:17:17.908      "name": "BaseBdev1",
00:17:17.908      "aliases": [
00:17:17.908        "0e42129a-0364-4626-9cdd-9de78add5d69"
00:17:17.908      ],
00:17:17.908      "product_name": "Malloc disk",
00:17:17.908      "block_size": 512,
00:17:17.908      "num_blocks": 65536,
00:17:17.908      "uuid": "0e42129a-0364-4626-9cdd-9de78add5d69",
00:17:17.908      "assigned_rate_limits": {
00:17:17.908        "rw_ios_per_sec": 0,
00:17:17.908        "rw_mbytes_per_sec": 0,
00:17:17.908        "r_mbytes_per_sec": 0,
00:17:17.908        "w_mbytes_per_sec": 0
00:17:17.908      },
00:17:17.908      "claimed": true,
00:17:17.908      "claim_type": "exclusive_write",
00:17:17.908      "zoned": false,
00:17:17.908      "supported_io_types": {
00:17:17.908        "read": true,
00:17:17.908        "write": true,
00:17:17.908        "unmap": true,
00:17:17.908        "write_zeroes": true,
00:17:17.908        "flush": true,
00:17:17.908        "reset": true,
00:17:17.908        "compare": false,
00:17:17.908        "compare_and_write": false,
00:17:17.908        "abort": true,
00:17:17.908        "nvme_admin": false,
00:17:17.908        "nvme_io": false
00:17:17.908      },
00:17:17.908      "memory_domains": [
00:17:17.908        {
00:17:17.908          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:17.908          "dma_device_type": 2
00:17:17.908        }
00:17:17.908      ],
00:17:17.908      "driver_specific": {}
00:17:17.908    }
00:17:17.908  ]
00:17:17.908   23:50:48	-- common/autotest_common.sh@905 -- # return 0
00:17:17.908   23:50:48	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:17:17.908   23:50:48	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:17.908   23:50:48	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:17.908   23:50:48	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:17.908   23:50:48	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:17.908   23:50:48	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:17.908   23:50:48	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:17.908   23:50:48	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:17.908   23:50:48	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:17.908   23:50:48	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:17.908    23:50:48	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:17.908    23:50:48	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:18.167   23:50:48	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:18.167    "name": "Existed_Raid",
00:17:18.167    "uuid": "727e2923-9160-4823-bdd9-d210e622370b",
00:17:18.167    "strip_size_kb": 64,
00:17:18.167    "state": "configuring",
00:17:18.167    "raid_level": "raid0",
00:17:18.167    "superblock": true,
00:17:18.167    "num_base_bdevs": 4,
00:17:18.167    "num_base_bdevs_discovered": 1,
00:17:18.167    "num_base_bdevs_operational": 4,
00:17:18.167    "base_bdevs_list": [
00:17:18.167      {
00:17:18.167        "name": "BaseBdev1",
00:17:18.167        "uuid": "0e42129a-0364-4626-9cdd-9de78add5d69",
00:17:18.167        "is_configured": true,
00:17:18.167        "data_offset": 2048,
00:17:18.167        "data_size": 63488
00:17:18.167      },
00:17:18.167      {
00:17:18.167        "name": "BaseBdev2",
00:17:18.167        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:18.167        "is_configured": false,
00:17:18.167        "data_offset": 0,
00:17:18.167        "data_size": 0
00:17:18.167      },
00:17:18.167      {
00:17:18.167        "name": "BaseBdev3",
00:17:18.167        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:18.167        "is_configured": false,
00:17:18.167        "data_offset": 0,
00:17:18.167        "data_size": 0
00:17:18.167      },
00:17:18.167      {
00:17:18.167        "name": "BaseBdev4",
00:17:18.167        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:18.167        "is_configured": false,
00:17:18.167        "data_offset": 0,
00:17:18.167        "data_size": 0
00:17:18.167      }
00:17:18.167    ]
00:17:18.167  }'
00:17:18.167   23:50:48	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:18.167   23:50:48	-- common/autotest_common.sh@10 -- # set +x
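The first visible effect of the -s flag shows up in BaseBdev1's slot above: the on-disk superblock reserves the head of each member, so data_offset moves from 0 to 2048 blocks and data_size shrinks to match:

  data_size = num_blocks - data_offset = 65536 - 2048 = 63488   (512-byte blocks, i.e. 1 MiB reserved)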
00:17:18.735   23:50:49	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:18.994  [2024-12-13 23:50:49.568809] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:18.994  [2024-12-13 23:50:49.568855] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:17:18.994   23:50:49	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:17:18.994   23:50:49	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:17:19.252   23:50:49	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:17:19.511  BaseBdev1
00:17:19.511   23:50:50	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:17:19.511   23:50:50	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:17:19.511   23:50:50	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:19.511   23:50:50	-- common/autotest_common.sh@899 -- # local i
00:17:19.511   23:50:50	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:19.511   23:50:50	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:19.511   23:50:50	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:19.770   23:50:50	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:19.770  [
00:17:19.770    {
00:17:19.770      "name": "BaseBdev1",
00:17:19.770      "aliases": [
00:17:19.770        "790f72ad-c018-4211-a02f-5046df45e567"
00:17:19.770      ],
00:17:19.770      "product_name": "Malloc disk",
00:17:19.770      "block_size": 512,
00:17:19.770      "num_blocks": 65536,
00:17:19.770      "uuid": "790f72ad-c018-4211-a02f-5046df45e567",
00:17:19.770      "assigned_rate_limits": {
00:17:19.770        "rw_ios_per_sec": 0,
00:17:19.770        "rw_mbytes_per_sec": 0,
00:17:19.770        "r_mbytes_per_sec": 0,
00:17:19.770        "w_mbytes_per_sec": 0
00:17:19.770      },
00:17:19.770      "claimed": false,
00:17:19.770      "zoned": false,
00:17:19.770      "supported_io_types": {
00:17:19.770        "read": true,
00:17:19.770        "write": true,
00:17:19.770        "unmap": true,
00:17:19.770        "write_zeroes": true,
00:17:19.770        "flush": true,
00:17:19.770        "reset": true,
00:17:19.770        "compare": false,
00:17:19.770        "compare_and_write": false,
00:17:19.770        "abort": true,
00:17:19.770        "nvme_admin": false,
00:17:19.770        "nvme_io": false
00:17:19.770      },
00:17:19.770      "memory_domains": [
00:17:19.770        {
00:17:19.770          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:19.770          "dma_device_type": 2
00:17:19.770        }
00:17:19.770      ],
00:17:19.770      "driver_specific": {}
00:17:19.770    }
00:17:19.770  ]
00:17:19.770   23:50:50	-- common/autotest_common.sh@905 -- # return 0
00:17:19.770   23:50:50	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:20.029  [2024-12-13 23:50:50.627166] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:20.029  [2024-12-13 23:50:50.628941] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:20.029  [2024-12-13 23:50:50.629387] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:20.029  [2024-12-13 23:50:50.629414] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:20.029  [2024-12-13 23:50:50.629534] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:20.029  [2024-12-13 23:50:50.629550] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:20.029  [2024-12-13 23:50:50.629692] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:20.029   23:50:50	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:17:20.029   23:50:50	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:20.029   23:50:50	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:17:20.029   23:50:50	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:20.029   23:50:50	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:20.029   23:50:50	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:20.029   23:50:50	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:20.029   23:50:50	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:20.029   23:50:50	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:20.029   23:50:50	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:20.029   23:50:50	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:20.029   23:50:50	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:20.029    23:50:50	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:20.029    23:50:50	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:20.288   23:50:50	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:20.288    "name": "Existed_Raid",
00:17:20.288    "uuid": "2bc6350e-d4d8-4d70-af9d-b3354f7ffd41",
00:17:20.288    "strip_size_kb": 64,
00:17:20.288    "state": "configuring",
00:17:20.288    "raid_level": "raid0",
00:17:20.288    "superblock": true,
00:17:20.288    "num_base_bdevs": 4,
00:17:20.288    "num_base_bdevs_discovered": 1,
00:17:20.288    "num_base_bdevs_operational": 4,
00:17:20.288    "base_bdevs_list": [
00:17:20.288      {
00:17:20.288        "name": "BaseBdev1",
00:17:20.288        "uuid": "790f72ad-c018-4211-a02f-5046df45e567",
00:17:20.288        "is_configured": true,
00:17:20.288        "data_offset": 2048,
00:17:20.288        "data_size": 63488
00:17:20.288      },
00:17:20.288      {
00:17:20.288        "name": "BaseBdev2",
00:17:20.288        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:20.288        "is_configured": false,
00:17:20.288        "data_offset": 0,
00:17:20.288        "data_size": 0
00:17:20.288      },
00:17:20.288      {
00:17:20.288        "name": "BaseBdev3",
00:17:20.288        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:20.288        "is_configured": false,
00:17:20.288        "data_offset": 0,
00:17:20.288        "data_size": 0
00:17:20.288      },
00:17:20.288      {
00:17:20.288        "name": "BaseBdev4",
00:17:20.288        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:20.288        "is_configured": false,
00:17:20.288        "data_offset": 0,
00:17:20.288        "data_size": 0
00:17:20.288      }
00:17:20.288    ]
00:17:20.288  }'
00:17:20.288   23:50:50	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:20.288   23:50:50	-- common/autotest_common.sh@10 -- # set +x
00:17:20.855   23:50:51	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:17:21.114  [2024-12-13 23:50:51.758740] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:21.114  BaseBdev2
00:17:21.114   23:50:51	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:17:21.114   23:50:51	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:17:21.114   23:50:51	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:21.114   23:50:51	-- common/autotest_common.sh@899 -- # local i
00:17:21.114   23:50:51	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:21.114   23:50:51	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:21.114   23:50:51	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:21.373   23:50:51	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:17:21.632  [
00:17:21.632    {
00:17:21.632      "name": "BaseBdev2",
00:17:21.632      "aliases": [
00:17:21.632        "6879a3ab-cfec-434d-94a8-39589a0c9d62"
00:17:21.632      ],
00:17:21.632      "product_name": "Malloc disk",
00:17:21.632      "block_size": 512,
00:17:21.632      "num_blocks": 65536,
00:17:21.632      "uuid": "6879a3ab-cfec-434d-94a8-39589a0c9d62",
00:17:21.632      "assigned_rate_limits": {
00:17:21.632        "rw_ios_per_sec": 0,
00:17:21.632        "rw_mbytes_per_sec": 0,
00:17:21.632        "r_mbytes_per_sec": 0,
00:17:21.632        "w_mbytes_per_sec": 0
00:17:21.632      },
00:17:21.632      "claimed": true,
00:17:21.632      "claim_type": "exclusive_write",
00:17:21.632      "zoned": false,
00:17:21.632      "supported_io_types": {
00:17:21.632        "read": true,
00:17:21.632        "write": true,
00:17:21.632        "unmap": true,
00:17:21.632        "write_zeroes": true,
00:17:21.632        "flush": true,
00:17:21.632        "reset": true,
00:17:21.632        "compare": false,
00:17:21.632        "compare_and_write": false,
00:17:21.632        "abort": true,
00:17:21.632        "nvme_admin": false,
00:17:21.632        "nvme_io": false
00:17:21.632      },
00:17:21.632      "memory_domains": [
00:17:21.632        {
00:17:21.632          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:21.632          "dma_device_type": 2
00:17:21.632        }
00:17:21.632      ],
00:17:21.632      "driver_specific": {}
00:17:21.632    }
00:17:21.632  ]
00:17:21.632   23:50:52	-- common/autotest_common.sh@905 -- # return 0
00:17:21.632   23:50:52	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:21.632   23:50:52	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:21.632   23:50:52	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:17:21.632   23:50:52	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:21.632   23:50:52	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:21.632   23:50:52	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:21.632   23:50:52	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:21.632   23:50:52	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:21.632   23:50:52	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:21.632   23:50:52	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:21.632   23:50:52	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:21.632   23:50:52	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:21.632    23:50:52	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:21.632    23:50:52	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:21.891   23:50:52	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:21.891    "name": "Existed_Raid",
00:17:21.891    "uuid": "2bc6350e-d4d8-4d70-af9d-b3354f7ffd41",
00:17:21.891    "strip_size_kb": 64,
00:17:21.891    "state": "configuring",
00:17:21.891    "raid_level": "raid0",
00:17:21.891    "superblock": true,
00:17:21.891    "num_base_bdevs": 4,
00:17:21.891    "num_base_bdevs_discovered": 2,
00:17:21.891    "num_base_bdevs_operational": 4,
00:17:21.891    "base_bdevs_list": [
00:17:21.891      {
00:17:21.891        "name": "BaseBdev1",
00:17:21.891        "uuid": "790f72ad-c018-4211-a02f-5046df45e567",
00:17:21.891        "is_configured": true,
00:17:21.891        "data_offset": 2048,
00:17:21.891        "data_size": 63488
00:17:21.891      },
00:17:21.891      {
00:17:21.891        "name": "BaseBdev2",
00:17:21.891        "uuid": "6879a3ab-cfec-434d-94a8-39589a0c9d62",
00:17:21.891        "is_configured": true,
00:17:21.891        "data_offset": 2048,
00:17:21.891        "data_size": 63488
00:17:21.891      },
00:17:21.891      {
00:17:21.891        "name": "BaseBdev3",
00:17:21.891        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:21.891        "is_configured": false,
00:17:21.891        "data_offset": 0,
00:17:21.891        "data_size": 0
00:17:21.891      },
00:17:21.891      {
00:17:21.891        "name": "BaseBdev4",
00:17:21.891        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:21.891        "is_configured": false,
00:17:21.891        "data_offset": 0,
00:17:21.891        "data_size": 0
00:17:21.891      }
00:17:21.891    ]
00:17:21.891  }'
00:17:21.891   23:50:52	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:21.891   23:50:52	-- common/autotest_common.sh@10 -- # set +x
00:17:22.459   23:50:53	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:17:22.717  [2024-12-13 23:50:53.247002] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:17:22.717  BaseBdev3
00:17:22.717   23:50:53	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:17:22.717   23:50:53	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:17:22.718   23:50:53	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:22.718   23:50:53	-- common/autotest_common.sh@899 -- # local i
00:17:22.718   23:50:53	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:22.718   23:50:53	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:22.718   23:50:53	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:22.718   23:50:53	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:17:22.976  [
00:17:22.976    {
00:17:22.976      "name": "BaseBdev3",
00:17:22.976      "aliases": [
00:17:22.976        "816ac83c-59b4-4e4d-b4b5-295d51dd649b"
00:17:22.976      ],
00:17:22.976      "product_name": "Malloc disk",
00:17:22.976      "block_size": 512,
00:17:22.976      "num_blocks": 65536,
00:17:22.976      "uuid": "816ac83c-59b4-4e4d-b4b5-295d51dd649b",
00:17:22.976      "assigned_rate_limits": {
00:17:22.976        "rw_ios_per_sec": 0,
00:17:22.977        "rw_mbytes_per_sec": 0,
00:17:22.977        "r_mbytes_per_sec": 0,
00:17:22.977        "w_mbytes_per_sec": 0
00:17:22.977      },
00:17:22.977      "claimed": true,
00:17:22.977      "claim_type": "exclusive_write",
00:17:22.977      "zoned": false,
00:17:22.977      "supported_io_types": {
00:17:22.977        "read": true,
00:17:22.977        "write": true,
00:17:22.977        "unmap": true,
00:17:22.977        "write_zeroes": true,
00:17:22.977        "flush": true,
00:17:22.977        "reset": true,
00:17:22.977        "compare": false,
00:17:22.977        "compare_and_write": false,
00:17:22.977        "abort": true,
00:17:22.977        "nvme_admin": false,
00:17:22.977        "nvme_io": false
00:17:22.977      },
00:17:22.977      "memory_domains": [
00:17:22.977        {
00:17:22.977          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:22.977          "dma_device_type": 2
00:17:22.977        }
00:17:22.977      ],
00:17:22.977      "driver_specific": {}
00:17:22.977    }
00:17:22.977  ]
00:17:22.977   23:50:53	-- common/autotest_common.sh@905 -- # return 0
00:17:22.977   23:50:53	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:22.977   23:50:53	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:22.977   23:50:53	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:17:22.977   23:50:53	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:22.977   23:50:53	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:22.977   23:50:53	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:22.977   23:50:53	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:22.977   23:50:53	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:22.977   23:50:53	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:22.977   23:50:53	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:22.977   23:50:53	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:22.977   23:50:53	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:22.977    23:50:53	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:22.977    23:50:53	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:23.236   23:50:53	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:23.236    "name": "Existed_Raid",
00:17:23.236    "uuid": "2bc6350e-d4d8-4d70-af9d-b3354f7ffd41",
00:17:23.236    "strip_size_kb": 64,
00:17:23.236    "state": "configuring",
00:17:23.236    "raid_level": "raid0",
00:17:23.236    "superblock": true,
00:17:23.236    "num_base_bdevs": 4,
00:17:23.236    "num_base_bdevs_discovered": 3,
00:17:23.236    "num_base_bdevs_operational": 4,
00:17:23.236    "base_bdevs_list": [
00:17:23.236      {
00:17:23.236        "name": "BaseBdev1",
00:17:23.236        "uuid": "790f72ad-c018-4211-a02f-5046df45e567",
00:17:23.236        "is_configured": true,
00:17:23.236        "data_offset": 2048,
00:17:23.236        "data_size": 63488
00:17:23.236      },
00:17:23.236      {
00:17:23.236        "name": "BaseBdev2",
00:17:23.236        "uuid": "6879a3ab-cfec-434d-94a8-39589a0c9d62",
00:17:23.236        "is_configured": true,
00:17:23.236        "data_offset": 2048,
00:17:23.236        "data_size": 63488
00:17:23.236      },
00:17:23.236      {
00:17:23.236        "name": "BaseBdev3",
00:17:23.236        "uuid": "816ac83c-59b4-4e4d-b4b5-295d51dd649b",
00:17:23.236        "is_configured": true,
00:17:23.236        "data_offset": 2048,
00:17:23.236        "data_size": 63488
00:17:23.236      },
00:17:23.236      {
00:17:23.236        "name": "BaseBdev4",
00:17:23.236        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:23.236        "is_configured": false,
00:17:23.236        "data_offset": 0,
00:17:23.236        "data_size": 0
00:17:23.236      }
00:17:23.236    ]
00:17:23.236  }'
00:17:23.236   23:50:53	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:23.236   23:50:53	-- common/autotest_common.sh@10 -- # set +x
00:17:23.803   23:50:54	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:17:24.062  [2024-12-13 23:50:54.631871] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:17:24.062  [2024-12-13 23:50:54.632162] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580
00:17:24.062  [2024-12-13 23:50:54.632177] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:17:24.062  [2024-12-13 23:50:54.632318] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860
00:17:24.062  [2024-12-13 23:50:54.632674] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580
00:17:24.062  [2024-12-13 23:50:54.632689] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580
00:17:24.062  [2024-12-13 23:50:54.632844] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:24.062  BaseBdev4
00:17:24.062   23:50:54	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:17:24.062   23:50:54	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:17:24.063   23:50:54	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:24.063   23:50:54	-- common/autotest_common.sh@899 -- # local i
00:17:24.063   23:50:54	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:24.063   23:50:54	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:24.063   23:50:54	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:24.321   23:50:54	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:17:24.321  [
00:17:24.321    {
00:17:24.321      "name": "BaseBdev4",
00:17:24.321      "aliases": [
00:17:24.321        "57a50efb-319c-48fa-9c75-3768554b60d8"
00:17:24.321      ],
00:17:24.321      "product_name": "Malloc disk",
00:17:24.321      "block_size": 512,
00:17:24.321      "num_blocks": 65536,
00:17:24.321      "uuid": "57a50efb-319c-48fa-9c75-3768554b60d8",
00:17:24.321      "assigned_rate_limits": {
00:17:24.321        "rw_ios_per_sec": 0,
00:17:24.321        "rw_mbytes_per_sec": 0,
00:17:24.321        "r_mbytes_per_sec": 0,
00:17:24.321        "w_mbytes_per_sec": 0
00:17:24.321      },
00:17:24.321      "claimed": true,
00:17:24.321      "claim_type": "exclusive_write",
00:17:24.321      "zoned": false,
00:17:24.321      "supported_io_types": {
00:17:24.321        "read": true,
00:17:24.321        "write": true,
00:17:24.321        "unmap": true,
00:17:24.321        "write_zeroes": true,
00:17:24.321        "flush": true,
00:17:24.321        "reset": true,
00:17:24.321        "compare": false,
00:17:24.321        "compare_and_write": false,
00:17:24.321        "abort": true,
00:17:24.321        "nvme_admin": false,
00:17:24.321        "nvme_io": false
00:17:24.321      },
00:17:24.321      "memory_domains": [
00:17:24.321        {
00:17:24.321          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:24.321          "dma_device_type": 2
00:17:24.321        }
00:17:24.321      ],
00:17:24.321      "driver_specific": {}
00:17:24.321    }
00:17:24.321  ]
00:17:24.321   23:50:55	-- common/autotest_common.sh@905 -- # return 0
00:17:24.321   23:50:55	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:24.321   23:50:55	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:24.321   23:50:55	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:17:24.321   23:50:55	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:24.321   23:50:55	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:17:24.321   23:50:55	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:24.321   23:50:55	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:24.321   23:50:55	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:24.321   23:50:55	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:24.321   23:50:55	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:24.321   23:50:55	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:24.321   23:50:55	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:24.321    23:50:55	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:24.321    23:50:55	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:24.580   23:50:55	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:24.580    "name": "Existed_Raid",
00:17:24.580    "uuid": "2bc6350e-d4d8-4d70-af9d-b3354f7ffd41",
00:17:24.580    "strip_size_kb": 64,
00:17:24.580    "state": "online",
00:17:24.580    "raid_level": "raid0",
00:17:24.580    "superblock": true,
00:17:24.580    "num_base_bdevs": 4,
00:17:24.580    "num_base_bdevs_discovered": 4,
00:17:24.580    "num_base_bdevs_operational": 4,
00:17:24.580    "base_bdevs_list": [
00:17:24.580      {
00:17:24.580        "name": "BaseBdev1",
00:17:24.580        "uuid": "790f72ad-c018-4211-a02f-5046df45e567",
00:17:24.580        "is_configured": true,
00:17:24.580        "data_offset": 2048,
00:17:24.580        "data_size": 63488
00:17:24.580      },
00:17:24.580      {
00:17:24.580        "name": "BaseBdev2",
00:17:24.580        "uuid": "6879a3ab-cfec-434d-94a8-39589a0c9d62",
00:17:24.580        "is_configured": true,
00:17:24.580        "data_offset": 2048,
00:17:24.580        "data_size": 63488
00:17:24.580      },
00:17:24.580      {
00:17:24.580        "name": "BaseBdev3",
00:17:24.580        "uuid": "816ac83c-59b4-4e4d-b4b5-295d51dd649b",
00:17:24.580        "is_configured": true,
00:17:24.580        "data_offset": 2048,
00:17:24.580        "data_size": 63488
00:17:24.580      },
00:17:24.580      {
00:17:24.580        "name": "BaseBdev4",
00:17:24.580        "uuid": "57a50efb-319c-48fa-9c75-3768554b60d8",
00:17:24.580        "is_configured": true,
00:17:24.580        "data_offset": 2048,
00:17:24.580        "data_size": 63488
00:17:24.580      }
00:17:24.580    ]
00:17:24.580  }'
00:17:24.580   23:50:55	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:24.580   23:50:55	-- common/autotest_common.sh@10 -- # set +x
00:17:25.516   23:50:55	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:17:25.516  [2024-12-13 23:50:56.064348] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:25.516  [2024-12-13 23:50:56.064378] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:25.516  [2024-12-13 23:50:56.064450] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:25.516   23:50:56	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:17:25.516   23:50:56	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:17:25.516   23:50:56	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:17:25.516   23:50:56	-- bdev/bdev_raid.sh@197 -- # return 1
00:17:25.516   23:50:56	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:17:25.516   23:50:56	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3
00:17:25.516   23:50:56	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:25.516   23:50:56	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:17:25.516   23:50:56	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:25.516   23:50:56	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:25.516   23:50:56	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:17:25.516   23:50:56	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:25.516   23:50:56	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:25.516   23:50:56	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:25.516   23:50:56	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:25.516    23:50:56	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:25.516    23:50:56	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:25.775   23:50:56	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:25.775    "name": "Existed_Raid",
00:17:25.775    "uuid": "2bc6350e-d4d8-4d70-af9d-b3354f7ffd41",
00:17:25.775    "strip_size_kb": 64,
00:17:25.775    "state": "offline",
00:17:25.775    "raid_level": "raid0",
00:17:25.775    "superblock": true,
00:17:25.775    "num_base_bdevs": 4,
00:17:25.775    "num_base_bdevs_discovered": 3,
00:17:25.775    "num_base_bdevs_operational": 3,
00:17:25.775    "base_bdevs_list": [
00:17:25.775      {
00:17:25.775        "name": null,
00:17:25.775        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:25.775        "is_configured": false,
00:17:25.775        "data_offset": 2048,
00:17:25.775        "data_size": 63488
00:17:25.775      },
00:17:25.775      {
00:17:25.775        "name": "BaseBdev2",
00:17:25.775        "uuid": "6879a3ab-cfec-434d-94a8-39589a0c9d62",
00:17:25.775        "is_configured": true,
00:17:25.775        "data_offset": 2048,
00:17:25.775        "data_size": 63488
00:17:25.775      },
00:17:25.775      {
00:17:25.775        "name": "BaseBdev3",
00:17:25.775        "uuid": "816ac83c-59b4-4e4d-b4b5-295d51dd649b",
00:17:25.775        "is_configured": true,
00:17:25.775        "data_offset": 2048,
00:17:25.775        "data_size": 63488
00:17:25.775      },
00:17:25.775      {
00:17:25.775        "name": "BaseBdev4",
00:17:25.775        "uuid": "57a50efb-319c-48fa-9c75-3768554b60d8",
00:17:25.775        "is_configured": true,
00:17:25.775        "data_offset": 2048,
00:17:25.775        "data_size": 63488
00:17:25.775      }
00:17:25.775    ]
00:17:25.775  }'
00:17:25.775   23:50:56	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:25.775   23:50:56	-- common/autotest_common.sh@10 -- # set +x
00:17:26.342   23:50:57	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:17:26.342   23:50:57	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:26.342    23:50:57	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:26.342    23:50:57	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:26.600   23:50:57	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:26.600   23:50:57	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:26.600   23:50:57	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:17:26.859  [2024-12-13 23:50:57.534247] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:17:27.117   23:50:57	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:27.117   23:50:57	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:27.117    23:50:57	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:27.117    23:50:57	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:27.375   23:50:57	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:27.375   23:50:57	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:27.375   23:50:57	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:17:27.375  [2024-12-13 23:50:58.042048] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:17:27.635   23:50:58	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:27.635   23:50:58	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:27.635    23:50:58	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:27.635    23:50:58	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:27.635   23:50:58	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:27.635   23:50:58	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:27.635   23:50:58	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:17:27.893  [2024-12-13 23:50:58.485193] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:17:27.893  [2024-12-13 23:50:58.485246] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline
00:17:27.894   23:50:58	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:27.894   23:50:58	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:27.894    23:50:58	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:27.894    23:50:58	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:17:28.153   23:50:58	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:17:28.153   23:50:58	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:17:28.153   23:50:58	-- bdev/bdev_raid.sh@287 -- # killprocess 118786
00:17:28.153   23:50:58	-- common/autotest_common.sh@936 -- # '[' -z 118786 ']'
00:17:28.153   23:50:58	-- common/autotest_common.sh@940 -- # kill -0 118786
00:17:28.153    23:50:58	-- common/autotest_common.sh@941 -- # uname
00:17:28.153   23:50:58	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:28.153    23:50:58	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 118786
00:17:28.153   23:50:58	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:28.153   23:50:58	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:28.153  killing process with pid 118786
00:17:28.153   23:50:58	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 118786'
00:17:28.153   23:50:58	-- common/autotest_common.sh@955 -- # kill 118786
00:17:28.153   23:50:58	-- common/autotest_common.sh@960 -- # wait 118786
00:17:28.153  [2024-12-13 23:50:58.783653] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:28.153  [2024-12-13 23:50:58.783861] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:29.089   23:50:59	-- bdev/bdev_raid.sh@289 -- # return 0
00:17:29.089  
00:17:29.089  real	0m14.279s
00:17:29.089  user	0m25.404s
00:17:29.089  sys	0m1.768s
00:17:29.089   23:50:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:17:29.089  ************************************
00:17:29.089  END TEST raid_state_function_test_sb
00:17:29.089  ************************************
00:17:29.089   23:50:59	-- common/autotest_common.sh@10 -- # set +x
00:17:29.089   23:50:59	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4
00:17:29.089   23:50:59	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:17:29.089   23:50:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:29.089   23:50:59	-- common/autotest_common.sh@10 -- # set +x
00:17:29.089  ************************************
00:17:29.089  START TEST raid_superblock_test
00:17:29.089  ************************************
00:17:29.090   23:50:59	-- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 4
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@338 -- # local raid_level=raid0
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']'
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@350 -- # strip_size=64
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@357 -- # raid_pid=119233
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@358 -- # waitforlisten 119233 /var/tmp/spdk-raid.sock
00:17:29.090   23:50:59	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:17:29.090   23:50:59	-- common/autotest_common.sh@829 -- # '[' -z 119233 ']'
00:17:29.090   23:50:59	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:17:29.090   23:50:59	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:29.090  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:17:29.090   23:50:59	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:17:29.090   23:50:59	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:29.090   23:50:59	-- common/autotest_common.sh@10 -- # set +x
00:17:29.349  [2024-12-13 23:50:59.831980] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:29.349  [2024-12-13 23:50:59.832783] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119233 ]
00:17:29.349  [2024-12-13 23:51:00.005563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:29.607  [2024-12-13 23:51:00.234599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:29.866  [2024-12-13 23:51:00.419314] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:30.125   23:51:00	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:30.125   23:51:00	-- common/autotest_common.sh@862 -- # return 0
00:17:30.125   23:51:00	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:17:30.125   23:51:00	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:17:30.125   23:51:00	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:17:30.125   23:51:00	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:17:30.125   23:51:00	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:17:30.125   23:51:00	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:30.125   23:51:00	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:17:30.125   23:51:00	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:30.125   23:51:00	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:17:30.384  malloc1
00:17:30.384   23:51:00	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:30.643  [2024-12-13 23:51:01.125174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:30.643  [2024-12-13 23:51:01.125774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:30.643  [2024-12-13 23:51:01.125947] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:17:30.643  [2024-12-13 23:51:01.126135] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:30.643  [2024-12-13 23:51:01.128541] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:30.643  [2024-12-13 23:51:01.128707] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:30.643  pt1
00:17:30.643   23:51:01	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:17:30.643   23:51:01	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:17:30.643   23:51:01	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:17:30.643   23:51:01	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:17:30.643   23:51:01	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:17:30.643   23:51:01	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:30.643   23:51:01	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:17:30.643   23:51:01	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:30.643   23:51:01	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:17:30.643  malloc2
00:17:30.643   23:51:01	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:30.902  [2024-12-13 23:51:01.582818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:30.902  [2024-12-13 23:51:01.583038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:30.902  [2024-12-13 23:51:01.583214] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:17:30.902  [2024-12-13 23:51:01.583385] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:30.902  [2024-12-13 23:51:01.585745] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:30.902  [2024-12-13 23:51:01.585910] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:30.902  pt2
00:17:30.902   23:51:01	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:17:30.902   23:51:01	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:17:30.902   23:51:01	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:17:30.902   23:51:01	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:17:30.902   23:51:01	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:17:30.902   23:51:01	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:30.902   23:51:01	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:17:30.902   23:51:01	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:30.902   23:51:01	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:17:31.161  malloc3
00:17:31.161   23:51:01	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:17:31.420  [2024-12-13 23:51:02.035753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:17:31.420  [2024-12-13 23:51:02.035847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:31.420  [2024-12-13 23:51:02.035894] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:17:31.420  [2024-12-13 23:51:02.035940] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:31.420  [2024-12-13 23:51:02.038403] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:31.420  [2024-12-13 23:51:02.038490] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:17:31.420  pt3
00:17:31.420   23:51:02	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:17:31.420   23:51:02	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:17:31.420   23:51:02	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4
00:17:31.420   23:51:02	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4
00:17:31.420   23:51:02	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:17:31.420   23:51:02	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:31.420   23:51:02	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:17:31.420   23:51:02	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:31.420   23:51:02	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4
00:17:31.679  malloc4
00:17:31.679   23:51:02	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:17:31.937  [2024-12-13 23:51:02.532280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:17:31.937  [2024-12-13 23:51:02.532351] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:31.937  [2024-12-13 23:51:02.532384] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80
00:17:31.937  [2024-12-13 23:51:02.532428] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:31.937  [2024-12-13 23:51:02.534706] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:31.937  [2024-12-13 23:51:02.534760] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:17:31.937  pt4
00:17:31.937   23:51:02	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:17:31.937   23:51:02	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:17:31.937   23:51:02	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
00:17:32.196  [2024-12-13 23:51:02.776389] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:32.196  [2024-12-13 23:51:02.778376] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:32.196  [2024-12-13 23:51:02.778473] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:17:32.196  [2024-12-13 23:51:02.778561] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:17:32.196  [2024-12-13 23:51:02.778793] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380
00:17:32.196  [2024-12-13 23:51:02.778817] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:17:32.196  [2024-12-13 23:51:02.778931] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0
00:17:32.196  [2024-12-13 23:51:02.779291] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380
00:17:32.196  [2024-12-13 23:51:02.779314] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380
00:17:32.196  [2024-12-13 23:51:02.779459] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:32.196   23:51:02	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:17:32.196   23:51:02	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:17:32.196   23:51:02	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:17:32.196   23:51:02	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:32.196   23:51:02	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:32.196   23:51:02	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:32.196   23:51:02	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:32.196   23:51:02	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:32.196   23:51:02	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:32.196   23:51:02	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:32.196    23:51:02	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:32.196    23:51:02	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:32.455   23:51:02	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:32.455    "name": "raid_bdev1",
00:17:32.455    "uuid": "eb7fe0b3-41de-40a1-acf1-d69fd1ff5381",
00:17:32.455    "strip_size_kb": 64,
00:17:32.455    "state": "online",
00:17:32.455    "raid_level": "raid0",
00:17:32.455    "superblock": true,
00:17:32.455    "num_base_bdevs": 4,
00:17:32.455    "num_base_bdevs_discovered": 4,
00:17:32.455    "num_base_bdevs_operational": 4,
00:17:32.455    "base_bdevs_list": [
00:17:32.455      {
00:17:32.455        "name": "pt1",
00:17:32.455        "uuid": "5aab2491-1a3b-5b63-a50d-fb819c00b619",
00:17:32.455        "is_configured": true,
00:17:32.455        "data_offset": 2048,
00:17:32.455        "data_size": 63488
00:17:32.455      },
00:17:32.455      {
00:17:32.455        "name": "pt2",
00:17:32.455        "uuid": "40b74750-35fa-5b0b-a06d-2d745b925004",
00:17:32.455        "is_configured": true,
00:17:32.455        "data_offset": 2048,
00:17:32.455        "data_size": 63488
00:17:32.455      },
00:17:32.455      {
00:17:32.455        "name": "pt3",
00:17:32.455        "uuid": "5842bbc6-79b8-5e2e-a5cb-edc2092ffa6b",
00:17:32.455        "is_configured": true,
00:17:32.455        "data_offset": 2048,
00:17:32.455        "data_size": 63488
00:17:32.455      },
00:17:32.455      {
00:17:32.455        "name": "pt4",
00:17:32.455        "uuid": "e6509fcf-ebdd-591c-8f65-d94530457991",
00:17:32.455        "is_configured": true,
00:17:32.455        "data_offset": 2048,
00:17:32.455        "data_size": 63488
00:17:32.455      }
00:17:32.455    ]
00:17:32.455  }'
00:17:32.455   23:51:02	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:32.455   23:51:02	-- common/autotest_common.sh@10 -- # set +x
00:17:33.023    23:51:03	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:17:33.023    23:51:03	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:17:33.282  [2024-12-13 23:51:03.764652] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:33.282   23:51:03	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=eb7fe0b3-41de-40a1-acf1-d69fd1ff5381
00:17:33.282   23:51:03	-- bdev/bdev_raid.sh@380 -- # '[' -z eb7fe0b3-41de-40a1-acf1-d69fd1ff5381 ']'
00:17:33.282   23:51:03	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:17:33.282  [2024-12-13 23:51:03.952495] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:33.282  [2024-12-13 23:51:03.952520] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:33.282  [2024-12-13 23:51:03.952592] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:33.282  [2024-12-13 23:51:03.952650] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:33.282  [2024-12-13 23:51:03.952660] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline
00:17:33.282    23:51:03	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:33.282    23:51:03	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:17:33.541   23:51:04	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:17:33.541   23:51:04	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:17:33.541   23:51:04	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:17:33.541   23:51:04	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:17:33.799   23:51:04	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:17:33.799   23:51:04	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:17:34.058   23:51:04	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:17:34.058   23:51:04	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:17:34.317   23:51:04	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:17:34.317   23:51:04	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:17:34.317    23:51:05	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:17:34.317    23:51:05	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:17:34.577   23:51:05	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:17:34.577   23:51:05	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:17:34.577   23:51:05	-- common/autotest_common.sh@650 -- # local es=0
00:17:34.577   23:51:05	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:17:34.577   23:51:05	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:34.577   23:51:05	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:34.577    23:51:05	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:34.577   23:51:05	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:34.577    23:51:05	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:34.577   23:51:05	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:34.577   23:51:05	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:34.577   23:51:05	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:17:34.577   23:51:05	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:17:34.835  [2024-12-13 23:51:05.476715] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:17:34.835  [2024-12-13 23:51:05.478473] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:17:34.835  [2024-12-13 23:51:05.478527] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:17:34.835  [2024-12-13 23:51:05.478572] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:17:34.835  [2024-12-13 23:51:05.478621] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:17:34.835  [2024-12-13 23:51:05.478692] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:17:34.835  [2024-12-13 23:51:05.478724] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:17:34.835  [2024-12-13 23:51:05.478816] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4
00:17:34.835  [2024-12-13 23:51:05.478843] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:34.835  [2024-12-13 23:51:05.478853] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring
00:17:34.835  request:
00:17:34.835  {
00:17:34.835    "name": "raid_bdev1",
00:17:34.835    "raid_level": "raid0",
00:17:34.835    "base_bdevs": [
00:17:34.835      "malloc1",
00:17:34.835      "malloc2",
00:17:34.835      "malloc3",
00:17:34.835      "malloc4"
00:17:34.835    ],
00:17:34.835    "superblock": false,
00:17:34.835    "strip_size_kb": 64,
00:17:34.835    "method": "bdev_raid_create",
00:17:34.835    "req_id": 1
00:17:34.835  }
00:17:34.835  Got JSON-RPC error response
00:17:34.835  response:
00:17:34.835  {
00:17:34.835    "code": -17,
00:17:34.835    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:17:34.835  }
00:17:34.835   23:51:05	-- common/autotest_common.sh@653 -- # es=1
00:17:34.835   23:51:05	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:34.835   23:51:05	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:34.835   23:51:05	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:34.835    23:51:05	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:34.835    23:51:05	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:17:35.094   23:51:05	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:17:35.094   23:51:05	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:17:35.094   23:51:05	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:35.353  [2024-12-13 23:51:05.912743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:35.353  [2024-12-13 23:51:05.912810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:35.353  [2024-12-13 23:51:05.912842] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:17:35.353  [2024-12-13 23:51:05.912869] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:35.353  [2024-12-13 23:51:05.915172] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:35.353  [2024-12-13 23:51:05.915241] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:35.353  [2024-12-13 23:51:05.915331] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:17:35.353  [2024-12-13 23:51:05.915386] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:35.353  pt1
00:17:35.353   23:51:05	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:17:35.353   23:51:05	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:17:35.353   23:51:05	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:35.353   23:51:05	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:35.353   23:51:05	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:35.353   23:51:05	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:35.353   23:51:05	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:35.353   23:51:05	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:35.353   23:51:05	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:35.353   23:51:05	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:35.353    23:51:05	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:35.353    23:51:05	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:35.612   23:51:06	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:35.612    "name": "raid_bdev1",
00:17:35.612    "uuid": "eb7fe0b3-41de-40a1-acf1-d69fd1ff5381",
00:17:35.612    "strip_size_kb": 64,
00:17:35.612    "state": "configuring",
00:17:35.612    "raid_level": "raid0",
00:17:35.612    "superblock": true,
00:17:35.612    "num_base_bdevs": 4,
00:17:35.612    "num_base_bdevs_discovered": 1,
00:17:35.612    "num_base_bdevs_operational": 4,
00:17:35.612    "base_bdevs_list": [
00:17:35.612      {
00:17:35.612        "name": "pt1",
00:17:35.612        "uuid": "5aab2491-1a3b-5b63-a50d-fb819c00b619",
00:17:35.612        "is_configured": true,
00:17:35.613        "data_offset": 2048,
00:17:35.613        "data_size": 63488
00:17:35.613      },
00:17:35.613      {
00:17:35.613        "name": null,
00:17:35.613        "uuid": "40b74750-35fa-5b0b-a06d-2d745b925004",
00:17:35.613        "is_configured": false,
00:17:35.613        "data_offset": 2048,
00:17:35.613        "data_size": 63488
00:17:35.613      },
00:17:35.613      {
00:17:35.613        "name": null,
00:17:35.613        "uuid": "5842bbc6-79b8-5e2e-a5cb-edc2092ffa6b",
00:17:35.613        "is_configured": false,
00:17:35.613        "data_offset": 2048,
00:17:35.613        "data_size": 63488
00:17:35.613      },
00:17:35.613      {
00:17:35.613        "name": null,
00:17:35.613        "uuid": "e6509fcf-ebdd-591c-8f65-d94530457991",
00:17:35.613        "is_configured": false,
00:17:35.613        "data_offset": 2048,
00:17:35.613        "data_size": 63488
00:17:35.613      }
00:17:35.613    ]
00:17:35.613  }'
00:17:35.613   23:51:06	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:35.613   23:51:06	-- common/autotest_common.sh@10 -- # set +x
00:17:36.181   23:51:06	-- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']'
00:17:36.181   23:51:06	-- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:36.439  [2024-12-13 23:51:07.011263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:36.439  [2024-12-13 23:51:07.011330] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:36.439  [2024-12-13 23:51:07.011369] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:17:36.439  [2024-12-13 23:51:07.011389] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:36.439  [2024-12-13 23:51:07.011802] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:36.439  [2024-12-13 23:51:07.011857] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:36.440  [2024-12-13 23:51:07.011944] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:17:36.440  [2024-12-13 23:51:07.011968] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:36.440  pt2
00:17:36.440   23:51:07	-- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:17:36.698  [2024-12-13 23:51:07.275311] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:17:36.698   23:51:07	-- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:17:36.698   23:51:07	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:17:36.698   23:51:07	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:36.698   23:51:07	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:36.698   23:51:07	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:36.698   23:51:07	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:36.698   23:51:07	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:36.698   23:51:07	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:36.698   23:51:07	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:36.698   23:51:07	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:36.698    23:51:07	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:36.698    23:51:07	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:36.957   23:51:07	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:36.957    "name": "raid_bdev1",
00:17:36.957    "uuid": "eb7fe0b3-41de-40a1-acf1-d69fd1ff5381",
00:17:36.957    "strip_size_kb": 64,
00:17:36.957    "state": "configuring",
00:17:36.957    "raid_level": "raid0",
00:17:36.957    "superblock": true,
00:17:36.957    "num_base_bdevs": 4,
00:17:36.957    "num_base_bdevs_discovered": 1,
00:17:36.957    "num_base_bdevs_operational": 4,
00:17:36.957    "base_bdevs_list": [
00:17:36.957      {
00:17:36.957        "name": "pt1",
00:17:36.957        "uuid": "5aab2491-1a3b-5b63-a50d-fb819c00b619",
00:17:36.957        "is_configured": true,
00:17:36.957        "data_offset": 2048,
00:17:36.957        "data_size": 63488
00:17:36.957      },
00:17:36.957      {
00:17:36.957        "name": null,
00:17:36.957        "uuid": "40b74750-35fa-5b0b-a06d-2d745b925004",
00:17:36.957        "is_configured": false,
00:17:36.957        "data_offset": 2048,
00:17:36.957        "data_size": 63488
00:17:36.957      },
00:17:36.957      {
00:17:36.957        "name": null,
00:17:36.957        "uuid": "5842bbc6-79b8-5e2e-a5cb-edc2092ffa6b",
00:17:36.957        "is_configured": false,
00:17:36.957        "data_offset": 2048,
00:17:36.957        "data_size": 63488
00:17:36.957      },
00:17:36.957      {
00:17:36.957        "name": null,
00:17:36.957        "uuid": "e6509fcf-ebdd-591c-8f65-d94530457991",
00:17:36.957        "is_configured": false,
00:17:36.957        "data_offset": 2048,
00:17:36.957        "data_size": 63488
00:17:36.957      }
00:17:36.957    ]
00:17:36.957  }'
00:17:36.957   23:51:07	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:36.957   23:51:07	-- common/autotest_common.sh@10 -- # set +x
00:17:37.525   23:51:08	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:17:37.525   23:51:08	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:17:37.525   23:51:08	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:37.783  [2024-12-13 23:51:08.277967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:37.783  [2024-12-13 23:51:08.278049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:37.783  [2024-12-13 23:51:08.278092] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:17:37.783  [2024-12-13 23:51:08.278114] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:37.783  [2024-12-13 23:51:08.278602] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:37.783  [2024-12-13 23:51:08.278661] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:37.783  [2024-12-13 23:51:08.278761] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:17:37.783  [2024-12-13 23:51:08.278787] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:37.783  pt2
00:17:37.783   23:51:08	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:17:37.783   23:51:08	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:17:37.783   23:51:08	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:17:37.783  [2024-12-13 23:51:08.459036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:17:37.783  [2024-12-13 23:51:08.459096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:37.783  [2024-12-13 23:51:08.459126] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:17:37.783  [2024-12-13 23:51:08.459149] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:37.783  [2024-12-13 23:51:08.459521] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:37.783  [2024-12-13 23:51:08.459575] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:17:37.783  [2024-12-13 23:51:08.459667] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:17:37.783  [2024-12-13 23:51:08.459704] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:17:37.783  pt3
00:17:37.783   23:51:08	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:17:37.783   23:51:08	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:17:37.783   23:51:08	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:17:38.041  [2024-12-13 23:51:08.639263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:17:38.041  [2024-12-13 23:51:08.639458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:38.041  [2024-12-13 23:51:08.639560] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:17:38.041  [2024-12-13 23:51:08.639632] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:38.041  [2024-12-13 23:51:08.640414] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:38.041  [2024-12-13 23:51:08.640512] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:17:38.041  [2024-12-13 23:51:08.640660] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:17:38.041  [2024-12-13 23:51:08.640699] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:17:38.041  [2024-12-13 23:51:08.640893] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580
00:17:38.041  [2024-12-13 23:51:08.640926] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:17:38.041  [2024-12-13 23:51:08.641070] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:17:38.041  [2024-12-13 23:51:08.641543] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580
00:17:38.041  [2024-12-13 23:51:08.641595] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580
00:17:38.041  [2024-12-13 23:51:08.641786] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:38.041  pt4
00:17:38.041   23:51:08	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:17:38.041   23:51:08	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:17:38.041   23:51:08	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:17:38.041   23:51:08	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:17:38.041   23:51:08	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:17:38.041   23:51:08	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:38.041   23:51:08	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:38.041   23:51:08	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:38.041   23:51:08	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:38.041   23:51:08	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:38.041   23:51:08	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:38.041   23:51:08	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:38.041    23:51:08	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:38.041    23:51:08	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:38.300   23:51:08	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:38.300    "name": "raid_bdev1",
00:17:38.300    "uuid": "eb7fe0b3-41de-40a1-acf1-d69fd1ff5381",
00:17:38.300    "strip_size_kb": 64,
00:17:38.300    "state": "online",
00:17:38.300    "raid_level": "raid0",
00:17:38.300    "superblock": true,
00:17:38.300    "num_base_bdevs": 4,
00:17:38.300    "num_base_bdevs_discovered": 4,
00:17:38.300    "num_base_bdevs_operational": 4,
00:17:38.300    "base_bdevs_list": [
00:17:38.300      {
00:17:38.300        "name": "pt1",
00:17:38.300        "uuid": "5aab2491-1a3b-5b63-a50d-fb819c00b619",
00:17:38.300        "is_configured": true,
00:17:38.300        "data_offset": 2048,
00:17:38.300        "data_size": 63488
00:17:38.300      },
00:17:38.300      {
00:17:38.300        "name": "pt2",
00:17:38.300        "uuid": "40b74750-35fa-5b0b-a06d-2d745b925004",
00:17:38.300        "is_configured": true,
00:17:38.300        "data_offset": 2048,
00:17:38.300        "data_size": 63488
00:17:38.300      },
00:17:38.300      {
00:17:38.300        "name": "pt3",
00:17:38.300        "uuid": "5842bbc6-79b8-5e2e-a5cb-edc2092ffa6b",
00:17:38.300        "is_configured": true,
00:17:38.300        "data_offset": 2048,
00:17:38.300        "data_size": 63488
00:17:38.300      },
00:17:38.300      {
00:17:38.300        "name": "pt4",
00:17:38.300        "uuid": "e6509fcf-ebdd-591c-8f65-d94530457991",
00:17:38.300        "is_configured": true,
00:17:38.300        "data_offset": 2048,
00:17:38.300        "data_size": 63488
00:17:38.300      }
00:17:38.300    ]
00:17:38.300  }'
00:17:38.300   23:51:08	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:38.300   23:51:08	-- common/autotest_common.sh@10 -- # set +x
00:17:38.867    23:51:09	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:17:38.867    23:51:09	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:17:39.125  [2024-12-13 23:51:09.651436] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:39.125   23:51:09	-- bdev/bdev_raid.sh@430 -- # '[' eb7fe0b3-41de-40a1-acf1-d69fd1ff5381 '!=' eb7fe0b3-41de-40a1-acf1-d69fd1ff5381 ']'
00:17:39.125   23:51:09	-- bdev/bdev_raid.sh@434 -- # has_redundancy raid0
00:17:39.125   23:51:09	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:17:39.125   23:51:09	-- bdev/bdev_raid.sh@197 -- # return 1
00:17:39.125   23:51:09	-- bdev/bdev_raid.sh@511 -- # killprocess 119233
00:17:39.125   23:51:09	-- common/autotest_common.sh@936 -- # '[' -z 119233 ']'
00:17:39.125   23:51:09	-- common/autotest_common.sh@940 -- # kill -0 119233
00:17:39.125    23:51:09	-- common/autotest_common.sh@941 -- # uname
00:17:39.125   23:51:09	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:39.125    23:51:09	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119233
00:17:39.125   23:51:09	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:39.125   23:51:09	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:39.125   23:51:09	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 119233'
00:17:39.125  killing process with pid 119233
00:17:39.125   23:51:09	-- common/autotest_common.sh@955 -- # kill 119233
00:17:39.125  [2024-12-13 23:51:09.691912] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:39.125  [2024-12-13 23:51:09.692013] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:39.125  [2024-12-13 23:51:09.692076] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:39.125  [2024-12-13 23:51:09.692088] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline
00:17:39.125   23:51:09	-- common/autotest_common.sh@960 -- # wait 119233
00:17:39.384  [2024-12-13 23:51:09.965116] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:40.320  ************************************
00:17:40.320  END TEST raid_superblock_test
00:17:40.320  ************************************
00:17:40.320   23:51:10	-- bdev/bdev_raid.sh@513 -- # return 0
00:17:40.320  
00:17:40.320  real	0m11.216s
00:17:40.320  user	0m19.349s
00:17:40.320  sys	0m1.414s
00:17:40.320   23:51:10	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:17:40.320   23:51:10	-- common/autotest_common.sh@10 -- # set +x
00:17:40.320   23:51:11	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:17:40.320   23:51:11	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false
00:17:40.320   23:51:11	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:17:40.320   23:51:11	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:40.320   23:51:11	-- common/autotest_common.sh@10 -- # set +x
00:17:40.320  ************************************
00:17:40.320  START TEST raid_state_function_test
00:17:40.320  ************************************
00:17:40.320   23:51:11	-- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 false
00:17:40.320   23:51:11	-- bdev/bdev_raid.sh@202 -- # local raid_level=concat
00:17:40.320   23:51:11	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:17:40.320   23:51:11	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:17:40.320   23:51:11	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:17:40.321    23:51:11	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:17:40.321    23:51:11	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:40.321    23:51:11	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:17:40.321    23:51:11	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:40.321    23:51:11	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:40.321    23:51:11	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:17:40.321    23:51:11	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:40.321    23:51:11	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:40.321    23:51:11	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:17:40.321    23:51:11	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:40.321    23:51:11	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:40.321    23:51:11	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:17:40.321    23:51:11	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:40.321    23:51:11	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:40.321   23:51:11	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:17:40.321   23:51:11	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:17:40.321   23:51:11	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:17:40.321   23:51:11	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:17:40.321   23:51:11	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:17:40.321   23:51:11	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:17:40.321   23:51:11	-- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']'
00:17:40.321   23:51:11	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:17:40.321   23:51:11	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:17:40.321   23:51:11	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:17:40.321   23:51:11	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:17:40.321   23:51:11	-- bdev/bdev_raid.sh@226 -- # raid_pid=119556
00:17:40.321  Process raid pid: 119556
00:17:40.321   23:51:11	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:17:40.321   23:51:11	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119556'
00:17:40.321   23:51:11	-- bdev/bdev_raid.sh@228 -- # waitforlisten 119556 /var/tmp/spdk-raid.sock
00:17:40.321   23:51:11	-- common/autotest_common.sh@829 -- # '[' -z 119556 ']'
00:17:40.321   23:51:11	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:17:40.321   23:51:11	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:40.321  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:17:40.321   23:51:11	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:17:40.321   23:51:11	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:40.321   23:51:11	-- common/autotest_common.sh@10 -- # set +x
00:17:40.579  [2024-12-13 23:51:11.098463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:40.579  [2024-12-13 23:51:11.098635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:40.579  [2024-12-13 23:51:11.256694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:40.838  [2024-12-13 23:51:11.497138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:41.097  [2024-12-13 23:51:11.684699] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:41.356   23:51:11	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:41.356   23:51:11	-- common/autotest_common.sh@862 -- # return 0
00:17:41.356   23:51:11	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:41.614  [2024-12-13 23:51:12.128096] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:41.614  [2024-12-13 23:51:12.128171] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:41.614  [2024-12-13 23:51:12.128183] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:41.614  [2024-12-13 23:51:12.128205] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:41.614  [2024-12-13 23:51:12.128212] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:41.615  [2024-12-13 23:51:12.128248] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:41.615  [2024-12-13 23:51:12.128256] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:41.615  [2024-12-13 23:51:12.128278] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:41.615   23:51:12	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:41.615   23:51:12	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:41.615   23:51:12	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:41.615   23:51:12	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:41.615   23:51:12	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:41.615   23:51:12	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:41.615   23:51:12	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:41.615   23:51:12	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:41.615   23:51:12	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:41.615   23:51:12	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:41.615    23:51:12	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:41.615    23:51:12	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:41.873   23:51:12	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:41.873    "name": "Existed_Raid",
00:17:41.873    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:41.873    "strip_size_kb": 64,
00:17:41.873    "state": "configuring",
00:17:41.873    "raid_level": "concat",
00:17:41.873    "superblock": false,
00:17:41.873    "num_base_bdevs": 4,
00:17:41.873    "num_base_bdevs_discovered": 0,
00:17:41.873    "num_base_bdevs_operational": 4,
00:17:41.873    "base_bdevs_list": [
00:17:41.873      {
00:17:41.873        "name": "BaseBdev1",
00:17:41.873        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:41.873        "is_configured": false,
00:17:41.873        "data_offset": 0,
00:17:41.873        "data_size": 0
00:17:41.873      },
00:17:41.873      {
00:17:41.873        "name": "BaseBdev2",
00:17:41.873        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:41.873        "is_configured": false,
00:17:41.873        "data_offset": 0,
00:17:41.873        "data_size": 0
00:17:41.873      },
00:17:41.873      {
00:17:41.874        "name": "BaseBdev3",
00:17:41.874        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:41.874        "is_configured": false,
00:17:41.874        "data_offset": 0,
00:17:41.874        "data_size": 0
00:17:41.874      },
00:17:41.874      {
00:17:41.874        "name": "BaseBdev4",
00:17:41.874        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:41.874        "is_configured": false,
00:17:41.874        "data_offset": 0,
00:17:41.874        "data_size": 0
00:17:41.874      }
00:17:41.874    ]
00:17:41.874  }'
00:17:41.874   23:51:12	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:41.874   23:51:12	-- common/autotest_common.sh@10 -- # set +x
00:17:42.441   23:51:13	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:42.699  [2024-12-13 23:51:13.260157] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:42.699  [2024-12-13 23:51:13.260193] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:17:42.699   23:51:13	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:42.958  [2024-12-13 23:51:13.448230] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:42.958  [2024-12-13 23:51:13.448292] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:42.958  [2024-12-13 23:51:13.448303] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:42.958  [2024-12-13 23:51:13.448328] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:42.958  [2024-12-13 23:51:13.448336] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:42.958  [2024-12-13 23:51:13.448370] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:42.958  [2024-12-13 23:51:13.448378] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:42.958  [2024-12-13 23:51:13.448400] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:42.958   23:51:13	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:17:43.216  [2024-12-13 23:51:13.733927] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:43.216  BaseBdev1
00:17:43.216   23:51:13	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:17:43.216   23:51:13	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:17:43.217   23:51:13	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:43.217   23:51:13	-- common/autotest_common.sh@899 -- # local i
00:17:43.217   23:51:13	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:43.217   23:51:13	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:43.217   23:51:13	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:43.475   23:51:13	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:43.734  [
00:17:43.734    {
00:17:43.734      "name": "BaseBdev1",
00:17:43.734      "aliases": [
00:17:43.734        "5c453a4f-0077-4f5c-b820-435dccc14b4c"
00:17:43.734      ],
00:17:43.734      "product_name": "Malloc disk",
00:17:43.734      "block_size": 512,
00:17:43.734      "num_blocks": 65536,
00:17:43.734      "uuid": "5c453a4f-0077-4f5c-b820-435dccc14b4c",
00:17:43.734      "assigned_rate_limits": {
00:17:43.734        "rw_ios_per_sec": 0,
00:17:43.734        "rw_mbytes_per_sec": 0,
00:17:43.734        "r_mbytes_per_sec": 0,
00:17:43.734        "w_mbytes_per_sec": 0
00:17:43.734      },
00:17:43.734      "claimed": true,
00:17:43.734      "claim_type": "exclusive_write",
00:17:43.734      "zoned": false,
00:17:43.734      "supported_io_types": {
00:17:43.734        "read": true,
00:17:43.734        "write": true,
00:17:43.734        "unmap": true,
00:17:43.734        "write_zeroes": true,
00:17:43.734        "flush": true,
00:17:43.734        "reset": true,
00:17:43.734        "compare": false,
00:17:43.734        "compare_and_write": false,
00:17:43.734        "abort": true,
00:17:43.734        "nvme_admin": false,
00:17:43.734        "nvme_io": false
00:17:43.734      },
00:17:43.734      "memory_domains": [
00:17:43.734        {
00:17:43.734          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:43.734          "dma_device_type": 2
00:17:43.734        }
00:17:43.734      ],
00:17:43.734      "driver_specific": {}
00:17:43.734    }
00:17:43.734  ]
00:17:43.734   23:51:14	-- common/autotest_common.sh@905 -- # return 0
00:17:43.734   23:51:14	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:43.734   23:51:14	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:43.734   23:51:14	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:43.734   23:51:14	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:43.734   23:51:14	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:43.734   23:51:14	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:43.734   23:51:14	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:43.734   23:51:14	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:43.734   23:51:14	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:43.734   23:51:14	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:43.734    23:51:14	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:43.734    23:51:14	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:43.734   23:51:14	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:43.734    "name": "Existed_Raid",
00:17:43.734    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:43.734    "strip_size_kb": 64,
00:17:43.734    "state": "configuring",
00:17:43.734    "raid_level": "concat",
00:17:43.734    "superblock": false,
00:17:43.734    "num_base_bdevs": 4,
00:17:43.734    "num_base_bdevs_discovered": 1,
00:17:43.734    "num_base_bdevs_operational": 4,
00:17:43.734    "base_bdevs_list": [
00:17:43.734      {
00:17:43.734        "name": "BaseBdev1",
00:17:43.734        "uuid": "5c453a4f-0077-4f5c-b820-435dccc14b4c",
00:17:43.734        "is_configured": true,
00:17:43.734        "data_offset": 0,
00:17:43.734        "data_size": 65536
00:17:43.734      },
00:17:43.734      {
00:17:43.734        "name": "BaseBdev2",
00:17:43.734        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:43.734        "is_configured": false,
00:17:43.734        "data_offset": 0,
00:17:43.734        "data_size": 0
00:17:43.734      },
00:17:43.734      {
00:17:43.734        "name": "BaseBdev3",
00:17:43.734        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:43.734        "is_configured": false,
00:17:43.734        "data_offset": 0,
00:17:43.734        "data_size": 0
00:17:43.734      },
00:17:43.734      {
00:17:43.734        "name": "BaseBdev4",
00:17:43.734        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:43.734        "is_configured": false,
00:17:43.734        "data_offset": 0,
00:17:43.734        "data_size": 0
00:17:43.734      }
00:17:43.734    ]
00:17:43.734  }'
00:17:43.734   23:51:14	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:43.734   23:51:14	-- common/autotest_common.sh@10 -- # set +x
00:17:44.302   23:51:14	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:44.560  [2024-12-13 23:51:15.158204] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:44.560  [2024-12-13 23:51:15.158256] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:17:44.560   23:51:15	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:17:44.560   23:51:15	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:44.817  [2024-12-13 23:51:15.350301] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:44.817  [2024-12-13 23:51:15.352171] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:44.817  [2024-12-13 23:51:15.352249] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:44.817  [2024-12-13 23:51:15.352260] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:44.817  [2024-12-13 23:51:15.352286] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:44.817  [2024-12-13 23:51:15.352293] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:44.817  [2024-12-13 23:51:15.352310] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:44.817   23:51:15	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:17:44.817   23:51:15	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:44.817   23:51:15	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:44.817   23:51:15	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:44.817   23:51:15	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:44.817   23:51:15	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:44.817   23:51:15	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:44.817   23:51:15	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:44.817   23:51:15	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:44.817   23:51:15	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:44.817   23:51:15	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:44.817   23:51:15	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:44.818    23:51:15	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:44.818    23:51:15	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:44.818   23:51:15	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:44.818    "name": "Existed_Raid",
00:17:44.818    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:44.818    "strip_size_kb": 64,
00:17:44.818    "state": "configuring",
00:17:44.818    "raid_level": "concat",
00:17:44.818    "superblock": false,
00:17:44.818    "num_base_bdevs": 4,
00:17:44.818    "num_base_bdevs_discovered": 1,
00:17:44.818    "num_base_bdevs_operational": 4,
00:17:44.818    "base_bdevs_list": [
00:17:44.818      {
00:17:44.818        "name": "BaseBdev1",
00:17:44.818        "uuid": "5c453a4f-0077-4f5c-b820-435dccc14b4c",
00:17:44.818        "is_configured": true,
00:17:44.818        "data_offset": 0,
00:17:44.818        "data_size": 65536
00:17:44.818      },
00:17:44.818      {
00:17:44.818        "name": "BaseBdev2",
00:17:44.818        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:44.818        "is_configured": false,
00:17:44.818        "data_offset": 0,
00:17:44.818        "data_size": 0
00:17:44.818      },
00:17:44.818      {
00:17:44.818        "name": "BaseBdev3",
00:17:44.818        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:44.818        "is_configured": false,
00:17:44.818        "data_offset": 0,
00:17:44.818        "data_size": 0
00:17:44.818      },
00:17:44.818      {
00:17:44.818        "name": "BaseBdev4",
00:17:44.818        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:44.818        "is_configured": false,
00:17:44.818        "data_offset": 0,
00:17:44.818        "data_size": 0
00:17:44.818      }
00:17:44.818    ]
00:17:44.818  }'
00:17:44.818   23:51:15	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:44.818   23:51:15	-- common/autotest_common.sh@10 -- # set +x
00:17:45.753   23:51:16	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:17:45.753  [2024-12-13 23:51:16.416393] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:45.753  BaseBdev2
00:17:45.753   23:51:16	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:17:45.753   23:51:16	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:17:45.753   23:51:16	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:45.753   23:51:16	-- common/autotest_common.sh@899 -- # local i
00:17:45.753   23:51:16	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:45.753   23:51:16	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:45.753   23:51:16	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:46.012   23:51:16	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:17:46.270  [
00:17:46.270    {
00:17:46.270      "name": "BaseBdev2",
00:17:46.270      "aliases": [
00:17:46.270        "0373458f-fab7-4e5a-93a7-354a225d996f"
00:17:46.270      ],
00:17:46.270      "product_name": "Malloc disk",
00:17:46.270      "block_size": 512,
00:17:46.270      "num_blocks": 65536,
00:17:46.270      "uuid": "0373458f-fab7-4e5a-93a7-354a225d996f",
00:17:46.270      "assigned_rate_limits": {
00:17:46.270        "rw_ios_per_sec": 0,
00:17:46.270        "rw_mbytes_per_sec": 0,
00:17:46.270        "r_mbytes_per_sec": 0,
00:17:46.271        "w_mbytes_per_sec": 0
00:17:46.271      },
00:17:46.271      "claimed": true,
00:17:46.271      "claim_type": "exclusive_write",
00:17:46.271      "zoned": false,
00:17:46.271      "supported_io_types": {
00:17:46.271        "read": true,
00:17:46.271        "write": true,
00:17:46.271        "unmap": true,
00:17:46.271        "write_zeroes": true,
00:17:46.271        "flush": true,
00:17:46.271        "reset": true,
00:17:46.271        "compare": false,
00:17:46.271        "compare_and_write": false,
00:17:46.271        "abort": true,
00:17:46.271        "nvme_admin": false,
00:17:46.271        "nvme_io": false
00:17:46.271      },
00:17:46.271      "memory_domains": [
00:17:46.271        {
00:17:46.271          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:46.271          "dma_device_type": 2
00:17:46.271        }
00:17:46.271      ],
00:17:46.271      "driver_specific": {}
00:17:46.271    }
00:17:46.271  ]
00:17:46.271   23:51:16	-- common/autotest_common.sh@905 -- # return 0
00:17:46.271   23:51:16	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:46.271   23:51:16	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:46.271   23:51:16	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:46.271   23:51:16	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:46.271   23:51:16	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:46.271   23:51:16	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:46.271   23:51:16	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:46.271   23:51:16	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:46.271   23:51:16	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:46.271   23:51:16	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:46.271   23:51:16	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:46.271   23:51:16	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:46.271    23:51:16	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:46.271    23:51:16	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:46.529   23:51:17	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:46.529    "name": "Existed_Raid",
00:17:46.530    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:46.530    "strip_size_kb": 64,
00:17:46.530    "state": "configuring",
00:17:46.530    "raid_level": "concat",
00:17:46.530    "superblock": false,
00:17:46.530    "num_base_bdevs": 4,
00:17:46.530    "num_base_bdevs_discovered": 2,
00:17:46.530    "num_base_bdevs_operational": 4,
00:17:46.530    "base_bdevs_list": [
00:17:46.530      {
00:17:46.530        "name": "BaseBdev1",
00:17:46.530        "uuid": "5c453a4f-0077-4f5c-b820-435dccc14b4c",
00:17:46.530        "is_configured": true,
00:17:46.530        "data_offset": 0,
00:17:46.530        "data_size": 65536
00:17:46.530      },
00:17:46.530      {
00:17:46.530        "name": "BaseBdev2",
00:17:46.530        "uuid": "0373458f-fab7-4e5a-93a7-354a225d996f",
00:17:46.530        "is_configured": true,
00:17:46.530        "data_offset": 0,
00:17:46.530        "data_size": 65536
00:17:46.530      },
00:17:46.530      {
00:17:46.530        "name": "BaseBdev3",
00:17:46.530        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:46.530        "is_configured": false,
00:17:46.530        "data_offset": 0,
00:17:46.530        "data_size": 0
00:17:46.530      },
00:17:46.530      {
00:17:46.530        "name": "BaseBdev4",
00:17:46.530        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:46.530        "is_configured": false,
00:17:46.530        "data_offset": 0,
00:17:46.530        "data_size": 0
00:17:46.530      }
00:17:46.530    ]
00:17:46.530  }'
00:17:46.530   23:51:17	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:46.530   23:51:17	-- common/autotest_common.sh@10 -- # set +x
00:17:47.099   23:51:17	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:17:47.401  [2024-12-13 23:51:17.988114] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:17:47.401  BaseBdev3
00:17:47.401   23:51:17	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:17:47.401   23:51:18	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:17:47.401   23:51:18	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:47.401   23:51:18	-- common/autotest_common.sh@899 -- # local i
00:17:47.401   23:51:18	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:47.401   23:51:18	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:47.401   23:51:18	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:47.686   23:51:18	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:17:47.945  [
00:17:47.945    {
00:17:47.945      "name": "BaseBdev3",
00:17:47.945      "aliases": [
00:17:47.945        "7519635f-401c-41a5-9c72-60d61dcb2dbd"
00:17:47.945      ],
00:17:47.945      "product_name": "Malloc disk",
00:17:47.945      "block_size": 512,
00:17:47.945      "num_blocks": 65536,
00:17:47.945      "uuid": "7519635f-401c-41a5-9c72-60d61dcb2dbd",
00:17:47.945      "assigned_rate_limits": {
00:17:47.945        "rw_ios_per_sec": 0,
00:17:47.945        "rw_mbytes_per_sec": 0,
00:17:47.945        "r_mbytes_per_sec": 0,
00:17:47.945        "w_mbytes_per_sec": 0
00:17:47.945      },
00:17:47.945      "claimed": true,
00:17:47.945      "claim_type": "exclusive_write",
00:17:47.945      "zoned": false,
00:17:47.945      "supported_io_types": {
00:17:47.945        "read": true,
00:17:47.945        "write": true,
00:17:47.945        "unmap": true,
00:17:47.945        "write_zeroes": true,
00:17:47.945        "flush": true,
00:17:47.945        "reset": true,
00:17:47.945        "compare": false,
00:17:47.945        "compare_and_write": false,
00:17:47.945        "abort": true,
00:17:47.945        "nvme_admin": false,
00:17:47.945        "nvme_io": false
00:17:47.945      },
00:17:47.945      "memory_domains": [
00:17:47.945        {
00:17:47.945          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:47.945          "dma_device_type": 2
00:17:47.945        }
00:17:47.945      ],
00:17:47.945      "driver_specific": {}
00:17:47.945    }
00:17:47.945  ]
00:17:47.945   23:51:18	-- common/autotest_common.sh@905 -- # return 0
00:17:47.945   23:51:18	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:47.945   23:51:18	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:47.945   23:51:18	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:47.945   23:51:18	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:47.945   23:51:18	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:47.945   23:51:18	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:47.945   23:51:18	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:47.945   23:51:18	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:47.945   23:51:18	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:47.945   23:51:18	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:47.945   23:51:18	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:47.945   23:51:18	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:47.945    23:51:18	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:47.945    23:51:18	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:47.945   23:51:18	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:47.945    "name": "Existed_Raid",
00:17:47.945    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:47.945    "strip_size_kb": 64,
00:17:47.945    "state": "configuring",
00:17:47.945    "raid_level": "concat",
00:17:47.945    "superblock": false,
00:17:47.945    "num_base_bdevs": 4,
00:17:47.945    "num_base_bdevs_discovered": 3,
00:17:47.945    "num_base_bdevs_operational": 4,
00:17:47.945    "base_bdevs_list": [
00:17:47.945      {
00:17:47.945        "name": "BaseBdev1",
00:17:47.945        "uuid": "5c453a4f-0077-4f5c-b820-435dccc14b4c",
00:17:47.945        "is_configured": true,
00:17:47.945        "data_offset": 0,
00:17:47.945        "data_size": 65536
00:17:47.945      },
00:17:47.945      {
00:17:47.945        "name": "BaseBdev2",
00:17:47.945        "uuid": "0373458f-fab7-4e5a-93a7-354a225d996f",
00:17:47.945        "is_configured": true,
00:17:47.945        "data_offset": 0,
00:17:47.945        "data_size": 65536
00:17:47.945      },
00:17:47.945      {
00:17:47.945        "name": "BaseBdev3",
00:17:47.945        "uuid": "7519635f-401c-41a5-9c72-60d61dcb2dbd",
00:17:47.945        "is_configured": true,
00:17:47.945        "data_offset": 0,
00:17:47.945        "data_size": 65536
00:17:47.945      },
00:17:47.945      {
00:17:47.945        "name": "BaseBdev4",
00:17:47.945        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:47.945        "is_configured": false,
00:17:47.945        "data_offset": 0,
00:17:47.945        "data_size": 0
00:17:47.946      }
00:17:47.946    ]
00:17:47.946  }'
00:17:47.946   23:51:18	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:47.946   23:51:18	-- common/autotest_common.sh@10 -- # set +x
00:17:48.881   23:51:19	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:17:48.881  [2024-12-13 23:51:19.531998] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:17:48.881  [2024-12-13 23:51:19.532051] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:17:48.881  [2024-12-13 23:51:19.532059] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:17:48.881  [2024-12-13 23:51:19.532215] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:17:48.881  [2024-12-13 23:51:19.532569] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:17:48.881  [2024-12-13 23:51:19.532582] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80
00:17:48.881  [2024-12-13 23:51:19.532823] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:48.881  BaseBdev4
00:17:48.881   23:51:19	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:17:48.881   23:51:19	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:17:48.881   23:51:19	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:48.881   23:51:19	-- common/autotest_common.sh@899 -- # local i
00:17:48.881   23:51:19	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:48.881   23:51:19	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:48.881   23:51:19	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:49.140   23:51:19	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:17:49.398  [
00:17:49.398    {
00:17:49.398      "name": "BaseBdev4",
00:17:49.398      "aliases": [
00:17:49.398        "c6bf9943-d68e-4126-9f68-7ab943afb1d3"
00:17:49.398      ],
00:17:49.398      "product_name": "Malloc disk",
00:17:49.398      "block_size": 512,
00:17:49.398      "num_blocks": 65536,
00:17:49.398      "uuid": "c6bf9943-d68e-4126-9f68-7ab943afb1d3",
00:17:49.398      "assigned_rate_limits": {
00:17:49.398        "rw_ios_per_sec": 0,
00:17:49.398        "rw_mbytes_per_sec": 0,
00:17:49.398        "r_mbytes_per_sec": 0,
00:17:49.398        "w_mbytes_per_sec": 0
00:17:49.398      },
00:17:49.398      "claimed": true,
00:17:49.398      "claim_type": "exclusive_write",
00:17:49.398      "zoned": false,
00:17:49.398      "supported_io_types": {
00:17:49.398        "read": true,
00:17:49.398        "write": true,
00:17:49.398        "unmap": true,
00:17:49.398        "write_zeroes": true,
00:17:49.398        "flush": true,
00:17:49.398        "reset": true,
00:17:49.398        "compare": false,
00:17:49.398        "compare_and_write": false,
00:17:49.398        "abort": true,
00:17:49.398        "nvme_admin": false,
00:17:49.398        "nvme_io": false
00:17:49.398      },
00:17:49.398      "memory_domains": [
00:17:49.398        {
00:17:49.398          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:49.398          "dma_device_type": 2
00:17:49.398        }
00:17:49.398      ],
00:17:49.398      "driver_specific": {}
00:17:49.398    }
00:17:49.398  ]
00:17:49.398   23:51:19	-- common/autotest_common.sh@905 -- # return 0
00:17:49.398   23:51:19	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:49.398   23:51:19	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:49.398   23:51:19	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:17:49.398   23:51:19	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:49.398   23:51:19	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:17:49.398   23:51:19	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:49.398   23:51:19	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:49.398   23:51:19	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:49.398   23:51:19	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:49.398   23:51:19	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:49.398   23:51:19	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:49.398   23:51:19	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:49.398    23:51:19	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:49.398    23:51:19	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:49.398   23:51:20	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:49.398    "name": "Existed_Raid",
00:17:49.398    "uuid": "49f2ba27-f774-4bf6-af8d-6beeca342ed2",
00:17:49.399    "strip_size_kb": 64,
00:17:49.399    "state": "online",
00:17:49.399    "raid_level": "concat",
00:17:49.399    "superblock": false,
00:17:49.399    "num_base_bdevs": 4,
00:17:49.399    "num_base_bdevs_discovered": 4,
00:17:49.399    "num_base_bdevs_operational": 4,
00:17:49.399    "base_bdevs_list": [
00:17:49.399      {
00:17:49.399        "name": "BaseBdev1",
00:17:49.399        "uuid": "5c453a4f-0077-4f5c-b820-435dccc14b4c",
00:17:49.399        "is_configured": true,
00:17:49.399        "data_offset": 0,
00:17:49.399        "data_size": 65536
00:17:49.399      },
00:17:49.399      {
00:17:49.399        "name": "BaseBdev2",
00:17:49.399        "uuid": "0373458f-fab7-4e5a-93a7-354a225d996f",
00:17:49.399        "is_configured": true,
00:17:49.399        "data_offset": 0,
00:17:49.399        "data_size": 65536
00:17:49.399      },
00:17:49.399      {
00:17:49.399        "name": "BaseBdev3",
00:17:49.399        "uuid": "7519635f-401c-41a5-9c72-60d61dcb2dbd",
00:17:49.399        "is_configured": true,
00:17:49.399        "data_offset": 0,
00:17:49.399        "data_size": 65536
00:17:49.399      },
00:17:49.399      {
00:17:49.399        "name": "BaseBdev4",
00:17:49.399        "uuid": "c6bf9943-d68e-4126-9f68-7ab943afb1d3",
00:17:49.399        "is_configured": true,
00:17:49.399        "data_offset": 0,
00:17:49.399        "data_size": 65536
00:17:49.399      }
00:17:49.399    ]
00:17:49.399  }'
00:17:49.399   23:51:20	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:49.399   23:51:20	-- common/autotest_common.sh@10 -- # set +x
00:17:49.966   23:51:20	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:17:50.224  [2024-12-13 23:51:20.895448] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:50.224  [2024-12-13 23:51:20.895480] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:50.224  [2024-12-13 23:51:20.895533] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:50.482   23:51:20	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:17:50.482   23:51:20	-- bdev/bdev_raid.sh@264 -- # has_redundancy concat
00:17:50.482   23:51:20	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:17:50.482   23:51:20	-- bdev/bdev_raid.sh@197 -- # return 1
00:17:50.482   23:51:20	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:17:50.482   23:51:20	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3
00:17:50.482   23:51:20	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:50.482   23:51:20	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:17:50.482   23:51:20	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:50.482   23:51:20	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:50.482   23:51:20	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:17:50.482   23:51:20	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:50.482   23:51:20	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:50.482   23:51:20	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:50.482   23:51:20	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:50.482    23:51:20	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:50.482    23:51:20	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:50.482   23:51:21	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:50.482    "name": "Existed_Raid",
00:17:50.482    "uuid": "49f2ba27-f774-4bf6-af8d-6beeca342ed2",
00:17:50.482    "strip_size_kb": 64,
00:17:50.482    "state": "offline",
00:17:50.482    "raid_level": "concat",
00:17:50.482    "superblock": false,
00:17:50.482    "num_base_bdevs": 4,
00:17:50.482    "num_base_bdevs_discovered": 3,
00:17:50.482    "num_base_bdevs_operational": 3,
00:17:50.482    "base_bdevs_list": [
00:17:50.482      {
00:17:50.482        "name": null,
00:17:50.482        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:50.482        "is_configured": false,
00:17:50.482        "data_offset": 0,
00:17:50.482        "data_size": 65536
00:17:50.482      },
00:17:50.482      {
00:17:50.482        "name": "BaseBdev2",
00:17:50.482        "uuid": "0373458f-fab7-4e5a-93a7-354a225d996f",
00:17:50.482        "is_configured": true,
00:17:50.482        "data_offset": 0,
00:17:50.482        "data_size": 65536
00:17:50.482      },
00:17:50.482      {
00:17:50.482        "name": "BaseBdev3",
00:17:50.482        "uuid": "7519635f-401c-41a5-9c72-60d61dcb2dbd",
00:17:50.483        "is_configured": true,
00:17:50.483        "data_offset": 0,
00:17:50.483        "data_size": 65536
00:17:50.483      },
00:17:50.483      {
00:17:50.483        "name": "BaseBdev4",
00:17:50.483        "uuid": "c6bf9943-d68e-4126-9f68-7ab943afb1d3",
00:17:50.483        "is_configured": true,
00:17:50.483        "data_offset": 0,
00:17:50.483        "data_size": 65536
00:17:50.483      }
00:17:50.483    ]
00:17:50.483  }'
00:17:50.483   23:51:21	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:50.483   23:51:21	-- common/autotest_common.sh@10 -- # set +x
00:17:51.049   23:51:21	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:17:51.049   23:51:21	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:51.049    23:51:21	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:51.049    23:51:21	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:51.308   23:51:21	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:51.308   23:51:21	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:51.308   23:51:21	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:17:51.566  [2024-12-13 23:51:22.234645] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:17:51.823   23:51:22	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:51.823   23:51:22	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:51.823    23:51:22	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:51.823    23:51:22	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:51.823   23:51:22	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:51.823   23:51:22	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:51.823   23:51:22	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:17:52.081  [2024-12-13 23:51:22.690026] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:17:52.081   23:51:22	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:52.081   23:51:22	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:52.081    23:51:22	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:52.081    23:51:22	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:52.339   23:51:22	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:52.339   23:51:22	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:52.339   23:51:22	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:17:52.598  [2024-12-13 23:51:23.137743] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:17:52.598  [2024-12-13 23:51:23.137913] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline
00:17:52.598   23:51:23	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:52.598   23:51:23	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:52.598    23:51:23	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:52.598    23:51:23	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:17:52.856   23:51:23	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:17:52.856   23:51:23	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:17:52.856   23:51:23	-- bdev/bdev_raid.sh@287 -- # killprocess 119556
00:17:52.856   23:51:23	-- common/autotest_common.sh@936 -- # '[' -z 119556 ']'
00:17:52.856   23:51:23	-- common/autotest_common.sh@940 -- # kill -0 119556
00:17:52.856    23:51:23	-- common/autotest_common.sh@941 -- # uname
00:17:52.856   23:51:23	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:52.856    23:51:23	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119556
00:17:52.856   23:51:23	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:52.856   23:51:23	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:52.856   23:51:23	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 119556'
00:17:52.856  killing process with pid 119556
00:17:52.856   23:51:23	-- common/autotest_common.sh@955 -- # kill 119556
00:17:52.856   23:51:23	-- common/autotest_common.sh@960 -- # wait 119556
00:17:52.856  [2024-12-13 23:51:23.431449] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:52.856  [2024-12-13 23:51:23.431570] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:53.791   23:51:24	-- bdev/bdev_raid.sh@289 -- # return 0
00:17:53.792  
00:17:53.792  real	0m13.415s
00:17:53.792  user	0m23.601s
00:17:53.792  sys	0m1.756s
00:17:53.792   23:51:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:17:53.792   23:51:24	-- common/autotest_common.sh@10 -- # set +x
00:17:53.792  ************************************
00:17:53.792  END TEST raid_state_function_test
00:17:53.792  ************************************
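[editor's note] The loop traced just above this banner (bdev_raid.sh@273-@282) walked i from 1 to num_base_bdevs, deleting BaseBdev2 through BaseBdev4 and re-checking after each deletion that the raid was still listed, until the last removal emptied the list. A hedged reconstruction of that teardown loop, not the script verbatim:

  # Delete each remaining base bdev; the raid must survive until the last one goes.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  num_base_bdevs=4
  for ((i = 1; i < num_base_bdevs; i++)); do
      raid_bdev=$($RPC bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
      [ "$raid_bdev" != "Existed_Raid" ] && exit 1    # raid should still be listed
      $RPC bdev_malloc_delete "BaseBdev$((i + 1))"
  done
  # After the final base bdev is gone the raid list should be empty.
  raid_bdev=$($RPC bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
  [ -n "$raid_bdev" ] && exit 1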
00:17:53.792   23:51:24	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true
00:17:53.792   23:51:24	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:17:53.792   23:51:24	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:53.792   23:51:24	-- common/autotest_common.sh@10 -- # set +x
00:17:53.792  ************************************
00:17:53.792  START TEST raid_state_function_test_sb
00:17:53.792  ************************************
00:17:53.792   23:51:24	-- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 true
00:17:53.792   23:51:24	-- bdev/bdev_raid.sh@202 -- # local raid_level=concat
00:17:53.792   23:51:24	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:17:53.792   23:51:24	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:17:53.792   23:51:24	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:17:53.792    23:51:24	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:17:53.792    23:51:24	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:53.792    23:51:24	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:17:53.792    23:51:24	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:53.792    23:51:24	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:53.792    23:51:24	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:17:53.792    23:51:24	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:53.792    23:51:24	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:54.051    23:51:24	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:17:54.051    23:51:24	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:54.051    23:51:24	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:54.051    23:51:24	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:17:54.051    23:51:24	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:54.051    23:51:24	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:54.051   23:51:24	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:17:54.051   23:51:24	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:17:54.051   23:51:24	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:17:54.051   23:51:24	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:17:54.051   23:51:24	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:17:54.051   23:51:24	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:17:54.051   23:51:24	-- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']'
00:17:54.051   23:51:24	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:17:54.051   23:51:24	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:17:54.051   23:51:24	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:17:54.051   23:51:24	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:17:54.051   23:51:24	-- bdev/bdev_raid.sh@226 -- # raid_pid=119982
00:17:54.051   23:51:24	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:17:54.051   23:51:24	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119982'
00:17:54.051  Process raid pid: 119982
00:17:54.051   23:51:24	-- bdev/bdev_raid.sh@228 -- # waitforlisten 119982 /var/tmp/spdk-raid.sock
00:17:54.051   23:51:24	-- common/autotest_common.sh@829 -- # '[' -z 119982 ']'
00:17:54.051   23:51:24	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:17:54.051   23:51:24	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:54.051   23:51:24	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:17:54.051  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:17:54.051   23:51:24	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:54.051   23:51:24	-- common/autotest_common.sh@10 -- # set +x
00:17:54.051  [2024-12-13 23:51:24.603716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:54.051  [2024-12-13 23:51:24.604157] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:54.051  [2024-12-13 23:51:24.774506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:54.310  [2024-12-13 23:51:24.958760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:54.567  [2024-12-13 23:51:25.147950] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:54.825   23:51:25	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:54.825   23:51:25	-- common/autotest_common.sh@862 -- # return 0
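[editor's note] waitforlisten (@228 above) blocks until the freshly launched bdev_svc answers on the UNIX socket; the trace shows it setting rpc_addr and max_retries=100 before returning 0. A simplified polling sketch of that idea, not the actual helper; using rpc_get_methods as the probe is an assumption, any cheap RPC would do:

  # Poll until the SPDK app is serving RPCs on its socket.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for ((retry = 0; retry < 100; retry++)); do
      if $RPC rpc_get_methods &>/dev/null; then
          break                      # app is up and listening
      fi
      sleep 0.5
  done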
00:17:54.825   23:51:25	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:55.083  [2024-12-13 23:51:25.760673] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:55.083  [2024-12-13 23:51:25.761055] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:55.083  [2024-12-13 23:51:25.761165] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:55.083  [2024-12-13 23:51:25.761230] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:55.083  [2024-12-13 23:51:25.761333] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:55.083  [2024-12-13 23:51:25.761412] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:55.083  [2024-12-13 23:51:25.761445] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:55.083  [2024-12-13 23:51:25.761619] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:55.083   23:51:25	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:55.083   23:51:25	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:55.083   23:51:25	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:55.083   23:51:25	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:55.083   23:51:25	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:55.083   23:51:25	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:55.083   23:51:25	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:55.083   23:51:25	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:55.083   23:51:25	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:55.083   23:51:25	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:55.083    23:51:25	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:55.083    23:51:25	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:55.342   23:51:25	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:55.342    "name": "Existed_Raid",
00:17:55.342    "uuid": "61a1bf95-b8b6-4d1b-8764-2510822c9d0c",
00:17:55.342    "strip_size_kb": 64,
00:17:55.342    "state": "configuring",
00:17:55.342    "raid_level": "concat",
00:17:55.342    "superblock": true,
00:17:55.342    "num_base_bdevs": 4,
00:17:55.342    "num_base_bdevs_discovered": 0,
00:17:55.342    "num_base_bdevs_operational": 4,
00:17:55.342    "base_bdevs_list": [
00:17:55.342      {
00:17:55.342        "name": "BaseBdev1",
00:17:55.342        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:55.342        "is_configured": false,
00:17:55.342        "data_offset": 0,
00:17:55.342        "data_size": 0
00:17:55.342      },
00:17:55.342      {
00:17:55.342        "name": "BaseBdev2",
00:17:55.342        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:55.342        "is_configured": false,
00:17:55.342        "data_offset": 0,
00:17:55.342        "data_size": 0
00:17:55.342      },
00:17:55.342      {
00:17:55.342        "name": "BaseBdev3",
00:17:55.342        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:55.342        "is_configured": false,
00:17:55.342        "data_offset": 0,
00:17:55.342        "data_size": 0
00:17:55.342      },
00:17:55.342      {
00:17:55.342        "name": "BaseBdev4",
00:17:55.342        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:55.342        "is_configured": false,
00:17:55.342        "data_offset": 0,
00:17:55.342        "data_size": 0
00:17:55.342      }
00:17:55.342    ]
00:17:55.342  }'
00:17:55.342   23:51:25	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:55.342   23:51:25	-- common/autotest_common.sh@10 -- # set +x
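[editor's note] The create at @232 passed -s, so this raid carries an on-disk superblock, and because none of the four named base bdevs exist yet it sits in "configuring" with num_base_bdevs_discovered 0, exactly as the JSON above reports. A compact sketch of that create-then-assert step, same flags and names as the trace:

  # Create a 4-member concat raid (64 KiB strip) with a superblock (-s) and
  # confirm it waits in "configuring" while its base bdevs are still missing.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_create -z 64 -s -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  state=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')
  [ "$state" = "configuring" ] || exit 1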
00:17:55.909   23:51:26	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:56.168  [2024-12-13 23:51:26.708662] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:56.168  [2024-12-13 23:51:26.708820] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:17:56.168   23:51:26	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:56.426  [2024-12-13 23:51:26.952757] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:56.426  [2024-12-13 23:51:26.952944] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:56.426  [2024-12-13 23:51:26.953074] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:56.426  [2024-12-13 23:51:26.953143] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:56.426  [2024-12-13 23:51:26.953292] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:56.426  [2024-12-13 23:51:26.953371] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:56.426  [2024-12-13 23:51:26.953666] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:56.426  [2024-12-13 23:51:26.953739] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:56.426   23:51:26	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:17:56.684  [2024-12-13 23:51:27.171325] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:56.684  BaseBdev1
00:17:56.684   23:51:27	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:17:56.684   23:51:27	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:17:56.684   23:51:27	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:56.684   23:51:27	-- common/autotest_common.sh@899 -- # local i
00:17:56.684   23:51:27	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:56.684   23:51:27	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:56.684   23:51:27	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:56.943   23:51:27	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:56.943  [
00:17:56.943    {
00:17:56.943      "name": "BaseBdev1",
00:17:56.943      "aliases": [
00:17:56.943        "3f8d2dea-61f8-4b0f-a14f-0c4dad58d916"
00:17:56.943      ],
00:17:56.943      "product_name": "Malloc disk",
00:17:56.943      "block_size": 512,
00:17:56.943      "num_blocks": 65536,
00:17:56.943      "uuid": "3f8d2dea-61f8-4b0f-a14f-0c4dad58d916",
00:17:56.943      "assigned_rate_limits": {
00:17:56.943        "rw_ios_per_sec": 0,
00:17:56.943        "rw_mbytes_per_sec": 0,
00:17:56.943        "r_mbytes_per_sec": 0,
00:17:56.943        "w_mbytes_per_sec": 0
00:17:56.943      },
00:17:56.943      "claimed": true,
00:17:56.943      "claim_type": "exclusive_write",
00:17:56.943      "zoned": false,
00:17:56.943      "supported_io_types": {
00:17:56.943        "read": true,
00:17:56.943        "write": true,
00:17:56.943        "unmap": true,
00:17:56.943        "write_zeroes": true,
00:17:56.943        "flush": true,
00:17:56.943        "reset": true,
00:17:56.943        "compare": false,
00:17:56.943        "compare_and_write": false,
00:17:56.943        "abort": true,
00:17:56.943        "nvme_admin": false,
00:17:56.943        "nvme_io": false
00:17:56.943      },
00:17:56.943      "memory_domains": [
00:17:56.943        {
00:17:56.943          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:56.943          "dma_device_type": 2
00:17:56.943        }
00:17:56.943      ],
00:17:56.943      "driver_specific": {}
00:17:56.943    }
00:17:56.943  ]
00:17:56.943   23:51:27	-- common/autotest_common.sh@905 -- # return 0
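[editor's note] waitforbdev (@240 above) is a readiness check: bdev_wait_for_examine flushes pending examine callbacks, then bdev_get_bdevs -b NAME -t 2000 waits up to two seconds for the bdev to appear, returning the descriptor dumped above. A sketch of that sequence with the same malloc geometry the test uses:

  # Create a base bdev and wait until it is registered and examined.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_malloc_create 32 512 -b BaseBdev1    # 32 MiB, 512-byte blocks -> 65536 blocks
  $RPC bdev_wait_for_examine                     # let examine callbacks finish
  $RPC bdev_get_bdevs -b BaseBdev1 -t 2000 >/dev/null || exit 1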
00:17:56.943   23:51:27	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:56.943   23:51:27	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:56.943   23:51:27	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:56.943   23:51:27	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:56.943   23:51:27	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:56.943   23:51:27	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:56.943   23:51:27	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:56.943   23:51:27	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:56.943   23:51:27	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:56.943   23:51:27	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:56.943    23:51:27	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:56.943    23:51:27	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:57.202   23:51:27	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:57.202    "name": "Existed_Raid",
00:17:57.202    "uuid": "79c77a40-785f-4568-82d6-7eef96de9fd3",
00:17:57.202    "strip_size_kb": 64,
00:17:57.202    "state": "configuring",
00:17:57.202    "raid_level": "concat",
00:17:57.202    "superblock": true,
00:17:57.202    "num_base_bdevs": 4,
00:17:57.202    "num_base_bdevs_discovered": 1,
00:17:57.202    "num_base_bdevs_operational": 4,
00:17:57.202    "base_bdevs_list": [
00:17:57.202      {
00:17:57.202        "name": "BaseBdev1",
00:17:57.202        "uuid": "3f8d2dea-61f8-4b0f-a14f-0c4dad58d916",
00:17:57.202        "is_configured": true,
00:17:57.202        "data_offset": 2048,
00:17:57.202        "data_size": 63488
00:17:57.202      },
00:17:57.202      {
00:17:57.202        "name": "BaseBdev2",
00:17:57.202        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:57.202        "is_configured": false,
00:17:57.202        "data_offset": 0,
00:17:57.202        "data_size": 0
00:17:57.202      },
00:17:57.202      {
00:17:57.202        "name": "BaseBdev3",
00:17:57.202        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:57.202        "is_configured": false,
00:17:57.202        "data_offset": 0,
00:17:57.202        "data_size": 0
00:17:57.202      },
00:17:57.202      {
00:17:57.202        "name": "BaseBdev4",
00:17:57.202        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:57.202        "is_configured": false,
00:17:57.202        "data_offset": 0,
00:17:57.202        "data_size": 0
00:17:57.202      }
00:17:57.202    ]
00:17:57.202  }'
00:17:57.202   23:51:27	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:57.202   23:51:27	-- common/autotest_common.sh@10 -- # set +x
00:17:57.771   23:51:28	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:58.029  [2024-12-13 23:51:28.587569] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:58.030  [2024-12-13 23:51:28.587745] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:17:58.030   23:51:28	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:17:58.030   23:51:28	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:17:58.289   23:51:28	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:17:58.547  BaseBdev1
00:17:58.547   23:51:29	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:17:58.547   23:51:29	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:17:58.547   23:51:29	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:58.547   23:51:29	-- common/autotest_common.sh@899 -- # local i
00:17:58.547   23:51:29	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:58.547   23:51:29	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:58.547   23:51:29	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:58.806   23:51:29	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:58.806  [
00:17:58.806    {
00:17:58.806      "name": "BaseBdev1",
00:17:58.806      "aliases": [
00:17:58.806        "30303ce0-3509-448e-8fb2-d210a50aab8c"
00:17:58.806      ],
00:17:58.806      "product_name": "Malloc disk",
00:17:58.806      "block_size": 512,
00:17:58.806      "num_blocks": 65536,
00:17:58.806      "uuid": "30303ce0-3509-448e-8fb2-d210a50aab8c",
00:17:58.806      "assigned_rate_limits": {
00:17:58.806        "rw_ios_per_sec": 0,
00:17:58.806        "rw_mbytes_per_sec": 0,
00:17:58.806        "r_mbytes_per_sec": 0,
00:17:58.806        "w_mbytes_per_sec": 0
00:17:58.806      },
00:17:58.806      "claimed": false,
00:17:58.806      "zoned": false,
00:17:58.806      "supported_io_types": {
00:17:58.806        "read": true,
00:17:58.806        "write": true,
00:17:58.806        "unmap": true,
00:17:58.806        "write_zeroes": true,
00:17:58.806        "flush": true,
00:17:58.806        "reset": true,
00:17:58.806        "compare": false,
00:17:58.806        "compare_and_write": false,
00:17:58.806        "abort": true,
00:17:58.806        "nvme_admin": false,
00:17:58.806        "nvme_io": false
00:17:58.806      },
00:17:58.806      "memory_domains": [
00:17:58.806        {
00:17:58.806          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:58.806          "dma_device_type": 2
00:17:58.806        }
00:17:58.806      ],
00:17:58.806      "driver_specific": {}
00:17:58.806    }
00:17:58.806  ]
00:17:58.806   23:51:29	-- common/autotest_common.sh@905 -- # return 0
00:17:58.806   23:51:29	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:59.064  [2024-12-13 23:51:29.760137] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:59.065  [2024-12-13 23:51:29.762196] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:59.065  [2024-12-13 23:51:29.762393] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:59.065  [2024-12-13 23:51:29.762523] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:59.065  [2024-12-13 23:51:29.762654] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:59.065  [2024-12-13 23:51:29.762749] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:59.065  [2024-12-13 23:51:29.762862] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
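[editor's note] With superblock=true the test took the @244-@248 branch seen above: the first raid was deleted, BaseBdev1 was destroyed and recreated (fresh UUID 30303ce0-...), and the second bdev_raid_create claimed it immediately, hence "bdev BaseBdev1 is claimed" while the other three members are still missing. The sequence, condensed from the trace:

  # Superblock path: recycle BaseBdev1 so the new raid claims a clean bdev.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_delete Existed_Raid
  $RPC bdev_malloc_delete BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  $RPC bdev_raid_create -z 64 -s -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # BaseBdev1 is claimed at once; the raid stays "configuring" until the rest exist.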
00:17:59.065   23:51:29	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:17:59.065   23:51:29	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:59.065   23:51:29	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:59.065   23:51:29	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:59.065   23:51:29	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:59.065   23:51:29	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:59.065   23:51:29	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:59.065   23:51:29	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:59.065   23:51:29	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:59.065   23:51:29	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:59.065   23:51:29	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:59.065   23:51:29	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:59.065    23:51:29	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:59.065    23:51:29	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:59.323   23:51:30	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:59.323    "name": "Existed_Raid",
00:17:59.323    "uuid": "9eb52059-d12b-444f-94bf-a69ec6bf8207",
00:17:59.323    "strip_size_kb": 64,
00:17:59.323    "state": "configuring",
00:17:59.323    "raid_level": "concat",
00:17:59.323    "superblock": true,
00:17:59.323    "num_base_bdevs": 4,
00:17:59.323    "num_base_bdevs_discovered": 1,
00:17:59.323    "num_base_bdevs_operational": 4,
00:17:59.323    "base_bdevs_list": [
00:17:59.323      {
00:17:59.323        "name": "BaseBdev1",
00:17:59.324        "uuid": "30303ce0-3509-448e-8fb2-d210a50aab8c",
00:17:59.324        "is_configured": true,
00:17:59.324        "data_offset": 2048,
00:17:59.324        "data_size": 63488
00:17:59.324      },
00:17:59.324      {
00:17:59.324        "name": "BaseBdev2",
00:17:59.324        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:59.324        "is_configured": false,
00:17:59.324        "data_offset": 0,
00:17:59.324        "data_size": 0
00:17:59.324      },
00:17:59.324      {
00:17:59.324        "name": "BaseBdev3",
00:17:59.324        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:59.324        "is_configured": false,
00:17:59.324        "data_offset": 0,
00:17:59.324        "data_size": 0
00:17:59.324      },
00:17:59.324      {
00:17:59.324        "name": "BaseBdev4",
00:17:59.324        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:59.324        "is_configured": false,
00:17:59.324        "data_offset": 0,
00:17:59.324        "data_size": 0
00:17:59.324      }
00:17:59.324    ]
00:17:59.324  }'
00:17:59.324   23:51:30	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:59.324   23:51:30	-- common/autotest_common.sh@10 -- # set +x
00:17:59.889   23:51:30	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:18:00.147  [2024-12-13 23:51:30.797398] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:00.147  BaseBdev2
00:18:00.147   23:51:30	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:18:00.147   23:51:30	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:18:00.147   23:51:30	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:00.147   23:51:30	-- common/autotest_common.sh@899 -- # local i
00:18:00.147   23:51:30	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:00.147   23:51:30	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:00.147   23:51:30	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:00.403   23:51:31	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:18:00.661  [
00:18:00.661    {
00:18:00.661      "name": "BaseBdev2",
00:18:00.661      "aliases": [
00:18:00.661        "ff20b427-abba-498a-b4be-1e6cd19b3885"
00:18:00.661      ],
00:18:00.661      "product_name": "Malloc disk",
00:18:00.661      "block_size": 512,
00:18:00.661      "num_blocks": 65536,
00:18:00.661      "uuid": "ff20b427-abba-498a-b4be-1e6cd19b3885",
00:18:00.661      "assigned_rate_limits": {
00:18:00.661        "rw_ios_per_sec": 0,
00:18:00.661        "rw_mbytes_per_sec": 0,
00:18:00.661        "r_mbytes_per_sec": 0,
00:18:00.661        "w_mbytes_per_sec": 0
00:18:00.661      },
00:18:00.661      "claimed": true,
00:18:00.661      "claim_type": "exclusive_write",
00:18:00.661      "zoned": false,
00:18:00.661      "supported_io_types": {
00:18:00.661        "read": true,
00:18:00.661        "write": true,
00:18:00.661        "unmap": true,
00:18:00.661        "write_zeroes": true,
00:18:00.661        "flush": true,
00:18:00.661        "reset": true,
00:18:00.661        "compare": false,
00:18:00.661        "compare_and_write": false,
00:18:00.661        "abort": true,
00:18:00.661        "nvme_admin": false,
00:18:00.661        "nvme_io": false
00:18:00.661      },
00:18:00.661      "memory_domains": [
00:18:00.661        {
00:18:00.661          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:00.661          "dma_device_type": 2
00:18:00.661        }
00:18:00.661      ],
00:18:00.661      "driver_specific": {}
00:18:00.661    }
00:18:00.661  ]
00:18:00.661   23:51:31	-- common/autotest_common.sh@905 -- # return 0
00:18:00.661   23:51:31	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:18:00.661   23:51:31	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:00.661   23:51:31	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:18:00.661   23:51:31	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:00.661   23:51:31	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:00.661   23:51:31	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:18:00.661   23:51:31	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:18:00.661   23:51:31	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:00.661   23:51:31	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:00.661   23:51:31	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:00.661   23:51:31	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:00.661   23:51:31	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:00.661    23:51:31	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:00.661    23:51:31	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:00.920   23:51:31	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:00.920    "name": "Existed_Raid",
00:18:00.920    "uuid": "9eb52059-d12b-444f-94bf-a69ec6bf8207",
00:18:00.920    "strip_size_kb": 64,
00:18:00.920    "state": "configuring",
00:18:00.920    "raid_level": "concat",
00:18:00.920    "superblock": true,
00:18:00.920    "num_base_bdevs": 4,
00:18:00.920    "num_base_bdevs_discovered": 2,
00:18:00.920    "num_base_bdevs_operational": 4,
00:18:00.920    "base_bdevs_list": [
00:18:00.920      {
00:18:00.920        "name": "BaseBdev1",
00:18:00.920        "uuid": "30303ce0-3509-448e-8fb2-d210a50aab8c",
00:18:00.920        "is_configured": true,
00:18:00.920        "data_offset": 2048,
00:18:00.920        "data_size": 63488
00:18:00.920      },
00:18:00.920      {
00:18:00.920        "name": "BaseBdev2",
00:18:00.920        "uuid": "ff20b427-abba-498a-b4be-1e6cd19b3885",
00:18:00.920        "is_configured": true,
00:18:00.920        "data_offset": 2048,
00:18:00.920        "data_size": 63488
00:18:00.920      },
00:18:00.920      {
00:18:00.920        "name": "BaseBdev3",
00:18:00.920        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:00.920        "is_configured": false,
00:18:00.920        "data_offset": 0,
00:18:00.920        "data_size": 0
00:18:00.920      },
00:18:00.920      {
00:18:00.920        "name": "BaseBdev4",
00:18:00.920        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:00.920        "is_configured": false,
00:18:00.920        "data_offset": 0,
00:18:00.920        "data_size": 0
00:18:00.920      }
00:18:00.920    ]
00:18:00.920  }'
00:18:00.920   23:51:31	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:00.920   23:51:31	-- common/autotest_common.sh@10 -- # set +x
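[editor's note] Note how the superblock shifts the data layout in the JSON above: each configured member reports data_offset 2048 and data_size 63488, i.e. the first 2048 of the malloc bdev's 65536 blocks (1 MiB at 512-byte blocks) are reserved for the raid superblock, leaving 65536 - 2048 = 63488 data blocks. A small jq check of that invariant, assuming the 32 MiB / 512 B malloc geometry used throughout this log:

  # Every configured member should satisfy data_offset + data_size == 65536.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_get_bdevs all |
      jq -e '[ .[0].base_bdevs_list[] | select(.is_configured)
               | .data_offset + .data_size == 65536 ] | all' >/dev/null || exit 1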
00:18:01.486   23:51:32	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:18:01.744  [2024-12-13 23:51:32.337062] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:18:01.744  BaseBdev3
00:18:01.744   23:51:32	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:18:01.744   23:51:32	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:18:01.744   23:51:32	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:01.744   23:51:32	-- common/autotest_common.sh@899 -- # local i
00:18:01.744   23:51:32	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:01.744   23:51:32	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:01.744   23:51:32	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:02.002   23:51:32	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:18:02.260  [
00:18:02.260    {
00:18:02.260      "name": "BaseBdev3",
00:18:02.260      "aliases": [
00:18:02.260        "4e1219e7-f863-4b2e-9671-ea50943c30a7"
00:18:02.260      ],
00:18:02.260      "product_name": "Malloc disk",
00:18:02.260      "block_size": 512,
00:18:02.260      "num_blocks": 65536,
00:18:02.260      "uuid": "4e1219e7-f863-4b2e-9671-ea50943c30a7",
00:18:02.260      "assigned_rate_limits": {
00:18:02.260        "rw_ios_per_sec": 0,
00:18:02.260        "rw_mbytes_per_sec": 0,
00:18:02.260        "r_mbytes_per_sec": 0,
00:18:02.260        "w_mbytes_per_sec": 0
00:18:02.260      },
00:18:02.260      "claimed": true,
00:18:02.260      "claim_type": "exclusive_write",
00:18:02.260      "zoned": false,
00:18:02.260      "supported_io_types": {
00:18:02.260        "read": true,
00:18:02.260        "write": true,
00:18:02.260        "unmap": true,
00:18:02.260        "write_zeroes": true,
00:18:02.260        "flush": true,
00:18:02.260        "reset": true,
00:18:02.260        "compare": false,
00:18:02.260        "compare_and_write": false,
00:18:02.260        "abort": true,
00:18:02.260        "nvme_admin": false,
00:18:02.260        "nvme_io": false
00:18:02.260      },
00:18:02.260      "memory_domains": [
00:18:02.260        {
00:18:02.260          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:02.260          "dma_device_type": 2
00:18:02.260        }
00:18:02.260      ],
00:18:02.260      "driver_specific": {}
00:18:02.260    }
00:18:02.260  ]
00:18:02.260   23:51:32	-- common/autotest_common.sh@905 -- # return 0
00:18:02.260   23:51:32	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:18:02.260   23:51:32	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:02.260   23:51:32	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:18:02.260   23:51:32	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:02.260   23:51:32	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:02.260   23:51:32	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:18:02.260   23:51:32	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:18:02.260   23:51:32	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:02.260   23:51:32	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:02.260   23:51:32	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:02.260   23:51:32	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:02.260   23:51:32	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:02.261    23:51:32	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:02.261    23:51:32	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:02.519   23:51:32	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:02.519    "name": "Existed_Raid",
00:18:02.519    "uuid": "9eb52059-d12b-444f-94bf-a69ec6bf8207",
00:18:02.519    "strip_size_kb": 64,
00:18:02.519    "state": "configuring",
00:18:02.519    "raid_level": "concat",
00:18:02.519    "superblock": true,
00:18:02.519    "num_base_bdevs": 4,
00:18:02.519    "num_base_bdevs_discovered": 3,
00:18:02.519    "num_base_bdevs_operational": 4,
00:18:02.519    "base_bdevs_list": [
00:18:02.519      {
00:18:02.519        "name": "BaseBdev1",
00:18:02.519        "uuid": "30303ce0-3509-448e-8fb2-d210a50aab8c",
00:18:02.519        "is_configured": true,
00:18:02.519        "data_offset": 2048,
00:18:02.519        "data_size": 63488
00:18:02.519      },
00:18:02.519      {
00:18:02.519        "name": "BaseBdev2",
00:18:02.519        "uuid": "ff20b427-abba-498a-b4be-1e6cd19b3885",
00:18:02.519        "is_configured": true,
00:18:02.519        "data_offset": 2048,
00:18:02.519        "data_size": 63488
00:18:02.519      },
00:18:02.519      {
00:18:02.519        "name": "BaseBdev3",
00:18:02.519        "uuid": "4e1219e7-f863-4b2e-9671-ea50943c30a7",
00:18:02.519        "is_configured": true,
00:18:02.519        "data_offset": 2048,
00:18:02.519        "data_size": 63488
00:18:02.519      },
00:18:02.519      {
00:18:02.519        "name": "BaseBdev4",
00:18:02.519        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:02.519        "is_configured": false,
00:18:02.519        "data_offset": 0,
00:18:02.519        "data_size": 0
00:18:02.519      }
00:18:02.519    ]
00:18:02.519  }'
00:18:02.519   23:51:33	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:02.519   23:51:33	-- common/autotest_common.sh@10 -- # set +x
00:18:03.085   23:51:33	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:18:03.085  [2024-12-13 23:51:33.800938] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:18:03.085  [2024-12-13 23:51:33.802748] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580
00:18:03.085  [2024-12-13 23:51:33.803033] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:18:03.085  [2024-12-13 23:51:33.803453] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860
00:18:03.085  BaseBdev4
00:18:03.085  [2024-12-13 23:51:33.804596] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580
00:18:03.085  [2024-12-13 23:51:33.804854] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580
00:18:03.085  [2024-12-13 23:51:33.805455] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:03.085   23:51:33	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:18:03.085   23:51:33	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:18:03.085   23:51:33	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:03.085   23:51:33	-- common/autotest_common.sh@899 -- # local i
00:18:03.085   23:51:33	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:03.085   23:51:33	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:03.085   23:51:33	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:03.343   23:51:34	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:18:03.601  [
00:18:03.601    {
00:18:03.601      "name": "BaseBdev4",
00:18:03.601      "aliases": [
00:18:03.601        "49136ea2-192a-44c4-8cfc-6b0c436ff55c"
00:18:03.601      ],
00:18:03.601      "product_name": "Malloc disk",
00:18:03.601      "block_size": 512,
00:18:03.601      "num_blocks": 65536,
00:18:03.601      "uuid": "49136ea2-192a-44c4-8cfc-6b0c436ff55c",
00:18:03.601      "assigned_rate_limits": {
00:18:03.601        "rw_ios_per_sec": 0,
00:18:03.601        "rw_mbytes_per_sec": 0,
00:18:03.601        "r_mbytes_per_sec": 0,
00:18:03.601        "w_mbytes_per_sec": 0
00:18:03.601      },
00:18:03.601      "claimed": true,
00:18:03.601      "claim_type": "exclusive_write",
00:18:03.601      "zoned": false,
00:18:03.601      "supported_io_types": {
00:18:03.601        "read": true,
00:18:03.601        "write": true,
00:18:03.601        "unmap": true,
00:18:03.601        "write_zeroes": true,
00:18:03.601        "flush": true,
00:18:03.601        "reset": true,
00:18:03.601        "compare": false,
00:18:03.601        "compare_and_write": false,
00:18:03.601        "abort": true,
00:18:03.601        "nvme_admin": false,
00:18:03.601        "nvme_io": false
00:18:03.601      },
00:18:03.601      "memory_domains": [
00:18:03.601        {
00:18:03.601          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:03.601          "dma_device_type": 2
00:18:03.601        }
00:18:03.601      ],
00:18:03.601      "driver_specific": {}
00:18:03.601    }
00:18:03.601  ]
00:18:03.601   23:51:34	-- common/autotest_common.sh@905 -- # return 0
00:18:03.602   23:51:34	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:18:03.602   23:51:34	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:03.602   23:51:34	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:18:03.602   23:51:34	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:03.602   23:51:34	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:03.602   23:51:34	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:18:03.602   23:51:34	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:18:03.602   23:51:34	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:03.602   23:51:34	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:03.602   23:51:34	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:03.602   23:51:34	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:03.602   23:51:34	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:03.602    23:51:34	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:03.602    23:51:34	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:03.860   23:51:34	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:03.860    "name": "Existed_Raid",
00:18:03.860    "uuid": "9eb52059-d12b-444f-94bf-a69ec6bf8207",
00:18:03.860    "strip_size_kb": 64,
00:18:03.860    "state": "online",
00:18:03.860    "raid_level": "concat",
00:18:03.860    "superblock": true,
00:18:03.860    "num_base_bdevs": 4,
00:18:03.860    "num_base_bdevs_discovered": 4,
00:18:03.860    "num_base_bdevs_operational": 4,
00:18:03.860    "base_bdevs_list": [
00:18:03.860      {
00:18:03.860        "name": "BaseBdev1",
00:18:03.860        "uuid": "30303ce0-3509-448e-8fb2-d210a50aab8c",
00:18:03.860        "is_configured": true,
00:18:03.860        "data_offset": 2048,
00:18:03.860        "data_size": 63488
00:18:03.860      },
00:18:03.860      {
00:18:03.860        "name": "BaseBdev2",
00:18:03.860        "uuid": "ff20b427-abba-498a-b4be-1e6cd19b3885",
00:18:03.860        "is_configured": true,
00:18:03.860        "data_offset": 2048,
00:18:03.860        "data_size": 63488
00:18:03.860      },
00:18:03.860      {
00:18:03.860        "name": "BaseBdev3",
00:18:03.860        "uuid": "4e1219e7-f863-4b2e-9671-ea50943c30a7",
00:18:03.860        "is_configured": true,
00:18:03.860        "data_offset": 2048,
00:18:03.860        "data_size": 63488
00:18:03.860      },
00:18:03.860      {
00:18:03.860        "name": "BaseBdev4",
00:18:03.860        "uuid": "49136ea2-192a-44c4-8cfc-6b0c436ff55c",
00:18:03.860        "is_configured": true,
00:18:03.860        "data_offset": 2048,
00:18:03.860        "data_size": 63488
00:18:03.860      }
00:18:03.860    ]
00:18:03.860  }'
00:18:03.860   23:51:34	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:03.860   23:51:34	-- common/autotest_common.sh@10 -- # set +x
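[editor's note] Once the fourth member was claimed, the debug lines above show the io device being registered and the raid created, and the @259 verification confirms the transition to "online" with all four base bdevs discovered. A compact assertion for that end state:

  # Assert the raid came online with every base bdev discovered.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_get_bdevs all |
      jq -e '.[] | select(.name == "Existed_Raid")
             | .state == "online" and
               .num_base_bdevs_discovered == .num_base_bdevs' >/dev/null || exit 1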
00:18:04.427   23:51:35	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:18:04.685  [2024-12-13 23:51:35.337503] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:04.685  [2024-12-13 23:51:35.337538] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:04.685  [2024-12-13 23:51:35.337633] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@264 -- # has_redundancy concat
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@197 -- # return 1
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:04.944    23:51:35	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:04.944    23:51:35	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:04.944    "name": "Existed_Raid",
00:18:04.944    "uuid": "9eb52059-d12b-444f-94bf-a69ec6bf8207",
00:18:04.944    "strip_size_kb": 64,
00:18:04.944    "state": "offline",
00:18:04.944    "raid_level": "concat",
00:18:04.944    "superblock": true,
00:18:04.944    "num_base_bdevs": 4,
00:18:04.944    "num_base_bdevs_discovered": 3,
00:18:04.944    "num_base_bdevs_operational": 3,
00:18:04.944    "base_bdevs_list": [
00:18:04.944      {
00:18:04.944        "name": null,
00:18:04.944        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:04.944        "is_configured": false,
00:18:04.944        "data_offset": 2048,
00:18:04.944        "data_size": 63488
00:18:04.944      },
00:18:04.944      {
00:18:04.944        "name": "BaseBdev2",
00:18:04.944        "uuid": "ff20b427-abba-498a-b4be-1e6cd19b3885",
00:18:04.944        "is_configured": true,
00:18:04.944        "data_offset": 2048,
00:18:04.944        "data_size": 63488
00:18:04.944      },
00:18:04.944      {
00:18:04.944        "name": "BaseBdev3",
00:18:04.944        "uuid": "4e1219e7-f863-4b2e-9671-ea50943c30a7",
00:18:04.944        "is_configured": true,
00:18:04.944        "data_offset": 2048,
00:18:04.944        "data_size": 63488
00:18:04.944      },
00:18:04.944      {
00:18:04.944        "name": "BaseBdev4",
00:18:04.944        "uuid": "49136ea2-192a-44c4-8cfc-6b0c436ff55c",
00:18:04.944        "is_configured": true,
00:18:04.944        "data_offset": 2048,
00:18:04.944        "data_size": 63488
00:18:04.944      }
00:18:04.944    ]
00:18:04.944  }'
00:18:04.944   23:51:35	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:04.944   23:51:35	-- common/autotest_common.sh@10 -- # set +x
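[editor's note] Deleting BaseBdev1 from the live array exercised has_redundancy (@264 above): for concat it fell through to return 1, so the expected state flipped from online to offline, and the JSON confirms the array went offline with 3 of 4 members left. A hedged restatement of that level-dependent expectation; the exact case list lives in bdev_raid.sh and may cover more levels than shown here:

  # Concat has no redundancy, so losing one member must take the raid offline.
  has_redundancy() {
      case $1 in
          raid1) return 0 ;;    # mirrored levels survive a member loss
          *) return 1 ;;
      esac
  }
  if has_redundancy concat; then expected_state=online; else expected_state=offline; fi
  echo "after removing one base bdev, expect: $expected_state"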
00:18:05.918   23:51:36	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:18:05.918   23:51:36	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:05.918    23:51:36	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:18:05.918    23:51:36	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:05.918   23:51:36	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:18:05.918   23:51:36	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:05.918   23:51:36	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:18:06.177  [2024-12-13 23:51:36.812337] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:18:06.177   23:51:36	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:18:06.177   23:51:36	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:06.177    23:51:36	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:06.177    23:51:36	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:18:06.436   23:51:37	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:18:06.436   23:51:37	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:06.436   23:51:37	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:18:06.694  [2024-12-13 23:51:37.250892] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:18:06.694   23:51:37	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:18:06.694   23:51:37	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:06.694    23:51:37	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:06.694    23:51:37	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:18:06.953   23:51:37	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:18:06.953   23:51:37	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:06.953   23:51:37	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:18:07.211  [2024-12-13 23:51:37.745563] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:18:07.211  [2024-12-13 23:51:37.745630] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline
00:18:07.211   23:51:37	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:18:07.211   23:51:37	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:07.211    23:51:37	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:07.211    23:51:37	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:18:07.470   23:51:38	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:18:07.470   23:51:38	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:18:07.470   23:51:38	-- bdev/bdev_raid.sh@287 -- # killprocess 119982
00:18:07.470   23:51:38	-- common/autotest_common.sh@936 -- # '[' -z 119982 ']'
00:18:07.470   23:51:38	-- common/autotest_common.sh@940 -- # kill -0 119982
00:18:07.470    23:51:38	-- common/autotest_common.sh@941 -- # uname
00:18:07.470   23:51:38	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:07.470    23:51:38	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119982
00:18:07.470   23:51:38	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:07.470   23:51:38	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:07.470   23:51:38	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 119982'
00:18:07.470  killing process with pid 119982
00:18:07.470   23:51:38	-- common/autotest_common.sh@955 -- # kill 119982
00:18:07.470  [2024-12-13 23:51:38.089295] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:07.470   23:51:38	-- common/autotest_common.sh@960 -- # wait 119982
00:18:07.470  [2024-12-13 23:51:38.089412] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:08.406  ************************************
00:18:08.406  END TEST raid_state_function_test_sb
00:18:08.406  ************************************
00:18:08.406   23:51:39	-- bdev/bdev_raid.sh@289 -- # return 0
00:18:08.406  
00:18:08.406  real	0m14.590s
00:18:08.406  user	0m25.908s
00:18:08.406  sys	0m1.726s
00:18:08.406   23:51:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:18:08.406   23:51:39	-- common/autotest_common.sh@10 -- # set +x
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4
00:18:08.664   23:51:39	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:18:08.664   23:51:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:08.664   23:51:39	-- common/autotest_common.sh@10 -- # set +x
00:18:08.664  ************************************
00:18:08.664  START TEST raid_superblock_test
00:18:08.664  ************************************
00:18:08.664   23:51:39	-- common/autotest_common.sh@1114 -- # raid_superblock_test concat 4
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@338 -- # local raid_level=concat
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']'
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@350 -- # strip_size=64
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@357 -- # raid_pid=120428
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:18:08.664   23:51:39	-- bdev/bdev_raid.sh@358 -- # waitforlisten 120428 /var/tmp/spdk-raid.sock
00:18:08.664   23:51:39	-- common/autotest_common.sh@829 -- # '[' -z 120428 ']'
00:18:08.664   23:51:39	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:18:08.664  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:18:08.664   23:51:39	-- common/autotest_common.sh@834 -- # local max_retries=100
00:18:08.664   23:51:39	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:18:08.664   23:51:39	-- common/autotest_common.sh@838 -- # xtrace_disable
00:18:08.664   23:51:39	-- common/autotest_common.sh@10 -- # set +x
00:18:08.664  [2024-12-13 23:51:39.232608] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:18:08.664  [2024-12-13 23:51:39.232770] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120428 ]
00:18:08.665  [2024-12-13 23:51:39.387161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:08.923  [2024-12-13 23:51:39.573574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:09.182  [2024-12-13 23:51:39.757422] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:09.749   23:51:40	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:09.750   23:51:40	-- common/autotest_common.sh@862 -- # return 0
00:18:09.750   23:51:40	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:18:09.750   23:51:40	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:18:09.750   23:51:40	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:18:09.750   23:51:40	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:18:09.750   23:51:40	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:18:09.750   23:51:40	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:09.750   23:51:40	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:18:09.750   23:51:40	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:09.750   23:51:40	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:18:09.750  malloc1
00:18:09.750   23:51:40	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:10.008  [2024-12-13 23:51:40.602790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:10.008  [2024-12-13 23:51:40.602883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:10.008  [2024-12-13 23:51:40.602918] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:18:10.009  [2024-12-13 23:51:40.602965] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:10.009  [2024-12-13 23:51:40.605177] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:10.009  [2024-12-13 23:51:40.605223] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:10.009  pt1
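
[editor note] One iteration of the setup loop above, condensed: a 32 MiB malloc bdev with 512-byte blocks is created and wrapped in a passthru bdev with a deterministic name and UUID, so the same wrapper can be recreated identically later in the test. Sketch using the exact RPCs from the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b malloc1
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
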
00:18:10.009   23:51:40	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:18:10.009   23:51:40	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:18:10.009   23:51:40	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:18:10.009   23:51:40	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:18:10.009   23:51:40	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:18:10.009   23:51:40	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:10.009   23:51:40	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:18:10.009   23:51:40	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:10.009   23:51:40	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:18:10.267  malloc2
00:18:10.267   23:51:40	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:10.526  [2024-12-13 23:51:41.020778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:10.526  [2024-12-13 23:51:41.020841] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:10.526  [2024-12-13 23:51:41.020883] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:18:10.526  [2024-12-13 23:51:41.020936] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:10.526  [2024-12-13 23:51:41.023093] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:10.526  [2024-12-13 23:51:41.023140] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:10.526  pt2
00:18:10.526   23:51:41	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:18:10.526   23:51:41	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:18:10.526   23:51:41	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:18:10.526   23:51:41	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:18:10.526   23:51:41	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:18:10.526   23:51:41	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:10.526   23:51:41	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:18:10.526   23:51:41	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:10.526   23:51:41	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:18:10.785  malloc3
00:18:10.785   23:51:41	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:18:10.785  [2024-12-13 23:51:41.473799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:18:10.785  [2024-12-13 23:51:41.473870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:10.785  [2024-12-13 23:51:41.473915] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:18:10.785  [2024-12-13 23:51:41.473959] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:10.785  [2024-12-13 23:51:41.476127] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:10.785  [2024-12-13 23:51:41.476179] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:18:10.785  pt3
00:18:10.785   23:51:41	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:18:10.785   23:51:41	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:18:10.785   23:51:41	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4
00:18:10.785   23:51:41	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4
00:18:10.785   23:51:41	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:18:10.785   23:51:41	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:10.785   23:51:41	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:18:10.785   23:51:41	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:10.785   23:51:41	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4
00:18:11.044  malloc4
00:18:11.044   23:51:41	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:18:11.303  [2024-12-13 23:51:41.862371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:18:11.303  [2024-12-13 23:51:41.862438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:11.303  [2024-12-13 23:51:41.862478] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80
00:18:11.303  [2024-12-13 23:51:41.862521] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:11.303  [2024-12-13 23:51:41.864717] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:11.303  [2024-12-13 23:51:41.864768] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:18:11.303  pt4
00:18:11.303   23:51:41	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:18:11.303   23:51:41	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:18:11.303   23:51:41	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
00:18:11.562  [2024-12-13 23:51:42.042472] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:11.562  [2024-12-13 23:51:42.044354] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:11.562  [2024-12-13 23:51:42.044432] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:18:11.562  [2024-12-13 23:51:42.044509] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:18:11.562  [2024-12-13 23:51:42.044711] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380
00:18:11.562  [2024-12-13 23:51:42.044725] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:18:11.562  [2024-12-13 23:51:42.044826] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0
00:18:11.562  [2024-12-13 23:51:42.045155] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380
00:18:11.562  [2024-12-13 23:51:42.045168] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380
00:18:11.562  [2024-12-13 23:51:42.045299] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
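
[editor note] The creation step itself, as a standalone call: -z 64 sets the strip size in KiB, -r concat selects the RAID level, and -s asks for an on-disk superblock on every base bdev, which is what the re-assembly and expected-failure steps later in this test rely on:

    $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
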
00:18:11.562   23:51:42	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:18:11.562   23:51:42	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:11.562   23:51:42	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:11.562   23:51:42	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:18:11.562   23:51:42	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:18:11.562   23:51:42	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:11.562   23:51:42	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:11.562   23:51:42	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:11.562   23:51:42	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:11.562   23:51:42	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:11.562    23:51:42	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:11.562    23:51:42	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:11.562   23:51:42	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:11.562    "name": "raid_bdev1",
00:18:11.562    "uuid": "144b758d-0e2c-426e-8b1c-111b3d30a21d",
00:18:11.562    "strip_size_kb": 64,
00:18:11.562    "state": "online",
00:18:11.562    "raid_level": "concat",
00:18:11.562    "superblock": true,
00:18:11.562    "num_base_bdevs": 4,
00:18:11.562    "num_base_bdevs_discovered": 4,
00:18:11.562    "num_base_bdevs_operational": 4,
00:18:11.562    "base_bdevs_list": [
00:18:11.562      {
00:18:11.562        "name": "pt1",
00:18:11.562        "uuid": "18fd99ce-9d14-5c13-9ae4-259a2b41e2a8",
00:18:11.562        "is_configured": true,
00:18:11.562        "data_offset": 2048,
00:18:11.562        "data_size": 63488
00:18:11.562      },
00:18:11.562      {
00:18:11.562        "name": "pt2",
00:18:11.562        "uuid": "54f4617e-d014-5b35-9dd6-81d5aeea46f4",
00:18:11.562        "is_configured": true,
00:18:11.562        "data_offset": 2048,
00:18:11.562        "data_size": 63488
00:18:11.562      },
00:18:11.562      {
00:18:11.562        "name": "pt3",
00:18:11.562        "uuid": "a38f1a47-d342-5fb3-af72-470786c28a48",
00:18:11.562        "is_configured": true,
00:18:11.562        "data_offset": 2048,
00:18:11.562        "data_size": 63488
00:18:11.562      },
00:18:11.562      {
00:18:11.562        "name": "pt4",
00:18:11.562        "uuid": "5b9ede48-ec31-5714-84b6-7ec2ea04652f",
00:18:11.562        "is_configured": true,
00:18:11.562        "data_offset": 2048,
00:18:11.562        "data_size": 63488
00:18:11.562      }
00:18:11.562    ]
00:18:11.562  }'
00:18:11.562   23:51:42	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:11.562   23:51:42	-- common/autotest_common.sh@10 -- # set +x
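
[editor note] verify_raid_bdev_state, traced above, boils down to dumping all raid bdevs and selecting the one of interest with jq, then comparing individual fields against the expected values. A sketch of that check, assuming the field names exactly as they appear in the JSON above:

    raid_bdev_info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r .state      <<< "$raid_bdev_info")" = online ]
    [ "$(jq -r .raid_level <<< "$raid_bdev_info")" = concat ]
    [ "$(jq -r .num_base_bdevs_discovered <<< "$raid_bdev_info")" -eq 4 ]
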
00:18:12.130    23:51:42	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:18:12.130    23:51:42	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:18:12.389  [2024-12-13 23:51:43.094748] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:12.389   23:51:43	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=144b758d-0e2c-426e-8b1c-111b3d30a21d
00:18:12.389   23:51:43	-- bdev/bdev_raid.sh@380 -- # '[' -z 144b758d-0e2c-426e-8b1c-111b3d30a21d ']'
00:18:12.389   23:51:43	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:18:12.647  [2024-12-13 23:51:43.274588] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:12.647  [2024-12-13 23:51:43.274609] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:12.647  [2024-12-13 23:51:43.274676] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:12.647  [2024-12-13 23:51:43.274742] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:12.647  [2024-12-13 23:51:43.274753] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline
00:18:12.647    23:51:43	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:12.647    23:51:43	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:18:12.906   23:51:43	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:18:12.906   23:51:43	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
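
[editor note] Teardown of the array, per the trace: bdev_raid_delete removes raid_bdev1 (the debug output shows the state changing from online to offline), and an empty result from bdev_raid_get_bdevs confirms nothing is left:

    $rpc bdev_raid_delete raid_bdev1
    raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[]')
    [ -z "$raid_bdev" ]
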
00:18:12.906   23:51:43	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:18:12.906   23:51:43	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:18:13.165   23:51:43	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:18:13.165   23:51:43	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:18:13.165   23:51:43	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:18:13.165   23:51:43	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:18:13.424   23:51:44	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:18:13.424   23:51:44	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:18:13.682    23:51:44	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:18:13.682    23:51:44	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:18:13.940   23:51:44	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
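
[editor note] The passthru wrappers are then removed one by one, and the jq expression above returns true only if any bdev still reports the passthru product name. Condensed sketch:

    for pt in pt1 pt2 pt3 pt4; do
        $rpc bdev_passthru_delete "$pt"
    done
    [ "$($rpc bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any')" = false ]
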
00:18:13.940   23:51:44	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:18:13.940   23:51:44	-- common/autotest_common.sh@650 -- # local es=0
00:18:13.940   23:51:44	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:18:13.940   23:51:44	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:13.940   23:51:44	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:13.940    23:51:44	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:13.940   23:51:44	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:13.940    23:51:44	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:13.940   23:51:44	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:13.940   23:51:44	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:13.940   23:51:44	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:18:13.941   23:51:44	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:18:14.199  [2024-12-13 23:51:44.722784] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:18:14.199  [2024-12-13 23:51:44.724446] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:18:14.199  [2024-12-13 23:51:44.724498] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:18:14.199  [2024-12-13 23:51:44.724542] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:18:14.199  [2024-12-13 23:51:44.724594] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:18:14.199  [2024-12-13 23:51:44.724671] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:18:14.199  [2024-12-13 23:51:44.724704] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:18:14.199  [2024-12-13 23:51:44.724760] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4
00:18:14.199  [2024-12-13 23:51:44.724785] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:14.199  [2024-12-13 23:51:44.724794] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring
00:18:14.199  request:
00:18:14.199  {
00:18:14.199    "name": "raid_bdev1",
00:18:14.199    "raid_level": "concat",
00:18:14.199    "base_bdevs": [
00:18:14.199      "malloc1",
00:18:14.199      "malloc2",
00:18:14.199      "malloc3",
00:18:14.199      "malloc4"
00:18:14.199    ],
00:18:14.199    "superblock": false,
00:18:14.199    "strip_size_kb": 64,
00:18:14.199    "method": "bdev_raid_create",
00:18:14.199    "req_id": 1
00:18:14.199  }
00:18:14.199  Got JSON-RPC error response
00:18:14.199  response:
00:18:14.199  {
00:18:14.199    "code": -17,
00:18:14.199    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:18:14.199  }
00:18:14.199   23:51:44	-- common/autotest_common.sh@653 -- # es=1
00:18:14.199   23:51:44	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:14.199   23:51:44	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:14.199   23:51:44	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
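
[editor note] The NOT wrapper traced above inverts a command's exit status: this step passes only if bdev_raid_create fails. It must fail here because each malloc bdev still carries the raid superblock written through its former passthru wrapper, so the RPC returns -17 ("File exists"). An equivalent expected-failure check:

    if $rpc bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo "bdev_raid_create unexpectedly succeeded" >&2
        exit 1
    fi
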
00:18:14.199    23:51:44	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:14.199    23:51:44	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:18:14.458   23:51:44	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:18:14.458   23:51:44	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:18:14.458   23:51:44	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:14.458  [2024-12-13 23:51:45.154814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:14.458  [2024-12-13 23:51:45.154877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:14.458  [2024-12-13 23:51:45.154911] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:18:14.458  [2024-12-13 23:51:45.154938] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:14.458  [2024-12-13 23:51:45.157130] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:14.458  [2024-12-13 23:51:45.157195] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:14.458  [2024-12-13 23:51:45.157289] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:18:14.458  [2024-12-13 23:51:45.157344] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:14.458  pt1
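
[editor note] Re-registering a single passthru bdev is enough for the examine path to find the stored superblock and re-claim the device ("raid superblock found on bdev pt1" above); raid_bdev1 reappears in "configuring" state with 1 of 4 base bdevs discovered, as the JSON below confirms. Sketch:

    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # configuring
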
00:18:14.458   23:51:45	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:18:14.458   23:51:45	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:14.458   23:51:45	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:14.458   23:51:45	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:18:14.458   23:51:45	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:18:14.458   23:51:45	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:14.458   23:51:45	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:14.458   23:51:45	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:14.458   23:51:45	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:14.458   23:51:45	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:14.458    23:51:45	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:14.458    23:51:45	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:14.716   23:51:45	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:14.716    "name": "raid_bdev1",
00:18:14.716    "uuid": "144b758d-0e2c-426e-8b1c-111b3d30a21d",
00:18:14.716    "strip_size_kb": 64,
00:18:14.716    "state": "configuring",
00:18:14.716    "raid_level": "concat",
00:18:14.716    "superblock": true,
00:18:14.716    "num_base_bdevs": 4,
00:18:14.716    "num_base_bdevs_discovered": 1,
00:18:14.716    "num_base_bdevs_operational": 4,
00:18:14.716    "base_bdevs_list": [
00:18:14.716      {
00:18:14.716        "name": "pt1",
00:18:14.716        "uuid": "18fd99ce-9d14-5c13-9ae4-259a2b41e2a8",
00:18:14.716        "is_configured": true,
00:18:14.716        "data_offset": 2048,
00:18:14.716        "data_size": 63488
00:18:14.716      },
00:18:14.716      {
00:18:14.716        "name": null,
00:18:14.716        "uuid": "54f4617e-d014-5b35-9dd6-81d5aeea46f4",
00:18:14.716        "is_configured": false,
00:18:14.716        "data_offset": 2048,
00:18:14.716        "data_size": 63488
00:18:14.716      },
00:18:14.716      {
00:18:14.716        "name": null,
00:18:14.716        "uuid": "a38f1a47-d342-5fb3-af72-470786c28a48",
00:18:14.716        "is_configured": false,
00:18:14.716        "data_offset": 2048,
00:18:14.716        "data_size": 63488
00:18:14.716      },
00:18:14.716      {
00:18:14.716        "name": null,
00:18:14.716        "uuid": "5b9ede48-ec31-5714-84b6-7ec2ea04652f",
00:18:14.716        "is_configured": false,
00:18:14.716        "data_offset": 2048,
00:18:14.716        "data_size": 63488
00:18:14.716      }
00:18:14.716    ]
00:18:14.716  }'
00:18:14.716   23:51:45	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:14.716   23:51:45	-- common/autotest_common.sh@10 -- # set +x
00:18:15.283   23:51:45	-- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']'
00:18:15.283   23:51:45	-- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:15.541  [2024-12-13 23:51:46.094959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:15.541  [2024-12-13 23:51:46.095006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:15.541  [2024-12-13 23:51:46.095040] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:18:15.541  [2024-12-13 23:51:46.095061] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:15.541  [2024-12-13 23:51:46.095429] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:15.541  [2024-12-13 23:51:46.095470] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:15.541  [2024-12-13 23:51:46.095551] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:18:15.541  [2024-12-13 23:51:46.095571] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:15.541  pt2
00:18:15.541   23:51:46	-- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:18:15.799  [2024-12-13 23:51:46.339006] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:18:15.799   23:51:46	-- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:18:15.800   23:51:46	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:15.800   23:51:46	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:15.800   23:51:46	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:18:15.800   23:51:46	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:18:15.800   23:51:46	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:15.800   23:51:46	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:15.800   23:51:46	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:15.800   23:51:46	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:15.800   23:51:46	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:15.800    23:51:46	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:15.800    23:51:46	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:16.057   23:51:46	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:16.057    "name": "raid_bdev1",
00:18:16.057    "uuid": "144b758d-0e2c-426e-8b1c-111b3d30a21d",
00:18:16.057    "strip_size_kb": 64,
00:18:16.057    "state": "configuring",
00:18:16.057    "raid_level": "concat",
00:18:16.057    "superblock": true,
00:18:16.057    "num_base_bdevs": 4,
00:18:16.057    "num_base_bdevs_discovered": 1,
00:18:16.057    "num_base_bdevs_operational": 4,
00:18:16.057    "base_bdevs_list": [
00:18:16.057      {
00:18:16.057        "name": "pt1",
00:18:16.057        "uuid": "18fd99ce-9d14-5c13-9ae4-259a2b41e2a8",
00:18:16.057        "is_configured": true,
00:18:16.057        "data_offset": 2048,
00:18:16.057        "data_size": 63488
00:18:16.057      },
00:18:16.057      {
00:18:16.057        "name": null,
00:18:16.057        "uuid": "54f4617e-d014-5b35-9dd6-81d5aeea46f4",
00:18:16.057        "is_configured": false,
00:18:16.057        "data_offset": 2048,
00:18:16.057        "data_size": 63488
00:18:16.057      },
00:18:16.057      {
00:18:16.057        "name": null,
00:18:16.057        "uuid": "a38f1a47-d342-5fb3-af72-470786c28a48",
00:18:16.057        "is_configured": false,
00:18:16.057        "data_offset": 2048,
00:18:16.057        "data_size": 63488
00:18:16.057      },
00:18:16.057      {
00:18:16.057        "name": null,
00:18:16.057        "uuid": "5b9ede48-ec31-5714-84b6-7ec2ea04652f",
00:18:16.057        "is_configured": false,
00:18:16.057        "data_offset": 2048,
00:18:16.057        "data_size": 63488
00:18:16.057      }
00:18:16.057    ]
00:18:16.057  }'
00:18:16.057   23:51:46	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:16.057   23:51:46	-- common/autotest_common.sh@10 -- # set +x
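
[editor note] The step above exercises removal from a half-assembled array: pt2 is re-registered and immediately deleted again (the _raid_bdev_remove_base_bdev debug line), after which the verify shows the array still configuring with only pt1 discovered. Condensed:

    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    $rpc bdev_passthru_delete pt2    # drops pt2 from the configuring array again
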
00:18:16.622   23:51:47	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:18:16.622   23:51:47	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:18:16.622   23:51:47	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:16.880  [2024-12-13 23:51:47.399187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:16.880  [2024-12-13 23:51:47.399242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:16.880  [2024-12-13 23:51:47.399275] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:18:16.880  [2024-12-13 23:51:47.399297] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:16.880  [2024-12-13 23:51:47.399674] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:16.880  [2024-12-13 23:51:47.399732] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:16.880  [2024-12-13 23:51:47.399807] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:18:16.880  [2024-12-13 23:51:47.399826] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:16.880  pt2
00:18:16.880   23:51:47	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:18:16.880   23:51:47	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:18:16.880   23:51:47	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:18:16.880  [2024-12-13 23:51:47.583207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:18:16.880  [2024-12-13 23:51:47.583262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:16.880  [2024-12-13 23:51:47.583286] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:18:16.880  [2024-12-13 23:51:47.583309] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:16.880  [2024-12-13 23:51:47.583651] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:16.880  [2024-12-13 23:51:47.583703] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:18:16.880  [2024-12-13 23:51:47.583774] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:18:16.880  [2024-12-13 23:51:47.583793] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:18:16.880  pt3
00:18:16.880   23:51:47	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:18:16.880   23:51:47	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:18:16.880   23:51:47	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:18:17.138  [2024-12-13 23:51:47.755246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:18:17.138  [2024-12-13 23:51:47.755304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:17.138  [2024-12-13 23:51:47.755338] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:18:17.138  [2024-12-13 23:51:47.755364] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:17.138  [2024-12-13 23:51:47.755708] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:17.138  [2024-12-13 23:51:47.755760] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:18:17.138  [2024-12-13 23:51:47.755839] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:18:17.138  [2024-12-13 23:51:47.755859] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:18:17.138  [2024-12-13 23:51:47.755968] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580
00:18:17.138  [2024-12-13 23:51:47.755980] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:18:17.138  [2024-12-13 23:51:47.756065] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:18:17.138  [2024-12-13 23:51:47.756358] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580
00:18:17.138  [2024-12-13 23:51:47.756379] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580
00:18:17.138  [2024-12-13 23:51:47.756488] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:17.138  pt4
00:18:17.138   23:51:47	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:18:17.138   23:51:47	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:18:17.138   23:51:47	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:18:17.138   23:51:47	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:17.138   23:51:47	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:17.138   23:51:47	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:18:17.138   23:51:47	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:18:17.138   23:51:47	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:17.138   23:51:47	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:17.138   23:51:47	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:17.138   23:51:47	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:17.138   23:51:47	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:17.138    23:51:47	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:17.138    23:51:47	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:17.397   23:51:47	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:17.397    "name": "raid_bdev1",
00:18:17.397    "uuid": "144b758d-0e2c-426e-8b1c-111b3d30a21d",
00:18:17.397    "strip_size_kb": 64,
00:18:17.397    "state": "online",
00:18:17.397    "raid_level": "concat",
00:18:17.397    "superblock": true,
00:18:17.397    "num_base_bdevs": 4,
00:18:17.397    "num_base_bdevs_discovered": 4,
00:18:17.397    "num_base_bdevs_operational": 4,
00:18:17.397    "base_bdevs_list": [
00:18:17.397      {
00:18:17.397        "name": "pt1",
00:18:17.397        "uuid": "18fd99ce-9d14-5c13-9ae4-259a2b41e2a8",
00:18:17.397        "is_configured": true,
00:18:17.397        "data_offset": 2048,
00:18:17.397        "data_size": 63488
00:18:17.397      },
00:18:17.397      {
00:18:17.397        "name": "pt2",
00:18:17.397        "uuid": "54f4617e-d014-5b35-9dd6-81d5aeea46f4",
00:18:17.397        "is_configured": true,
00:18:17.397        "data_offset": 2048,
00:18:17.397        "data_size": 63488
00:18:17.397      },
00:18:17.397      {
00:18:17.397        "name": "pt3",
00:18:17.397        "uuid": "a38f1a47-d342-5fb3-af72-470786c28a48",
00:18:17.397        "is_configured": true,
00:18:17.397        "data_offset": 2048,
00:18:17.397        "data_size": 63488
00:18:17.397      },
00:18:17.397      {
00:18:17.397        "name": "pt4",
00:18:17.397        "uuid": "5b9ede48-ec31-5714-84b6-7ec2ea04652f",
00:18:17.397        "is_configured": true,
00:18:17.397        "data_offset": 2048,
00:18:17.397        "data_size": 63488
00:18:17.397      }
00:18:17.397    ]
00:18:17.397  }'
00:18:17.397   23:51:47	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:17.397   23:51:47	-- common/autotest_common.sh@10 -- # set +x
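
[editor note] Once pt2 through pt4 are re-registered, each superblock is found in turn and the array configures itself: the state flips from "configuring" to "online" with all 4 base bdevs, as the JSON above shows. The loop, condensed:

    for i in 2 3 4; do
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
    done
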
00:18:17.964    23:51:48	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:18:17.964    23:51:48	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:18:18.223  [2024-12-13 23:51:48.799584] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:18.223   23:51:48	-- bdev/bdev_raid.sh@430 -- # '[' 144b758d-0e2c-426e-8b1c-111b3d30a21d '!=' 144b758d-0e2c-426e-8b1c-111b3d30a21d ']'
00:18:18.223   23:51:48	-- bdev/bdev_raid.sh@434 -- # has_redundancy concat
00:18:18.223   23:51:48	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:18:18.223   23:51:48	-- bdev/bdev_raid.sh@197 -- # return 1
00:18:18.223   23:51:48	-- bdev/bdev_raid.sh@511 -- # killprocess 120428
00:18:18.223   23:51:48	-- common/autotest_common.sh@936 -- # '[' -z 120428 ']'
00:18:18.223   23:51:48	-- common/autotest_common.sh@940 -- # kill -0 120428
00:18:18.223    23:51:48	-- common/autotest_common.sh@941 -- # uname
00:18:18.223   23:51:48	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:18.223    23:51:48	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120428
00:18:18.223   23:51:48	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:18.223   23:51:48	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:18.223  killing process with pid 120428
00:18:18.223   23:51:48	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 120428'
00:18:18.223   23:51:48	-- common/autotest_common.sh@955 -- # kill 120428
00:18:18.223  [2024-12-13 23:51:48.840772] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:18.223  [2024-12-13 23:51:48.840817] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:18.223   23:51:48	-- common/autotest_common.sh@960 -- # wait 120428
00:18:18.223  [2024-12-13 23:51:48.840866] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:18.223  [2024-12-13 23:51:48.840876] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline
00:18:18.481  [2024-12-13 23:51:49.107755] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:19.417   23:51:50	-- bdev/bdev_raid.sh@513 -- # return 0
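
[editor note] killprocess, traced above, is the shutdown helper: it checks the pid is still alive with kill -0, inspects the process name with ps (reactor_0 here; a 'sudo' wrapper would need a privileged kill instead), then kills and waits. Illustrative sketch, assuming the app was started as a child of this shell:

    kill -0 "$raid_pid"                        # verify the pid is still alive
    ps --no-headers -o comm= "$raid_pid"       # reactor_0 in this trace
    kill "$raid_pid" && wait "$raid_pid" || true
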
00:18:19.417  
00:18:19.417  real	0m10.946s
00:18:19.417  user	0m18.950s
00:18:19.417  sys	0m1.342s
00:18:19.417   23:51:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:18:19.417   23:51:50	-- common/autotest_common.sh@10 -- # set +x
00:18:19.417  ************************************
00:18:19.417  END TEST raid_superblock_test
00:18:19.417  ************************************
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false
00:18:19.676   23:51:50	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:18:19.676   23:51:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:19.676   23:51:50	-- common/autotest_common.sh@10 -- # set +x
00:18:19.676  ************************************
00:18:19.676  START TEST raid_state_function_test
00:18:19.676  ************************************
00:18:19.676   23:51:50	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 false
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid1
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:18:19.676    23:51:50	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:18:19.676    23:51:50	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:18:19.676    23:51:50	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:18:19.676    23:51:50	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:18:19.676    23:51:50	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:18:19.676    23:51:50	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:18:19.676    23:51:50	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:18:19.676    23:51:50	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:18:19.676    23:51:50	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:18:19.676    23:51:50	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:18:19.676    23:51:50	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:18:19.676    23:51:50	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:18:19.676    23:51:50	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:18:19.676    23:51:50	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']'
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@216 -- # strip_size=0
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@226 -- # raid_pid=120750
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120750'
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:18:19.676  Process raid pid: 120750
00:18:19.676   23:51:50	-- bdev/bdev_raid.sh@228 -- # waitforlisten 120750 /var/tmp/spdk-raid.sock
00:18:19.676   23:51:50	-- common/autotest_common.sh@829 -- # '[' -z 120750 ']'
00:18:19.676   23:51:50	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:18:19.676   23:51:50	-- common/autotest_common.sh@834 -- # local max_retries=100
00:18:19.676   23:51:50	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:18:19.676  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:18:19.676   23:51:50	-- common/autotest_common.sh@838 -- # xtrace_disable
00:18:19.676   23:51:50	-- common/autotest_common.sh@10 -- # set +x
00:18:19.676  [2024-12-13 23:51:50.253922] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:18:19.676  [2024-12-13 23:51:50.254118] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:19.935  [2024-12-13 23:51:50.425747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:19.935  [2024-12-13 23:51:50.619388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:20.194  [2024-12-13 23:51:50.806080] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:20.453   23:51:51	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:20.453   23:51:51	-- common/autotest_common.sh@862 -- # return 0
00:18:20.453   23:51:51	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:18:20.712  [2024-12-13 23:51:51.365036] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:20.712  [2024-12-13 23:51:51.365116] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:20.712  [2024-12-13 23:51:51.365128] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:20.712  [2024-12-13 23:51:51.365150] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:20.712  [2024-12-13 23:51:51.365157] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:20.712  [2024-12-13 23:51:51.365194] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:20.712  [2024-12-13 23:51:51.365202] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:20.712  [2024-12-13 23:51:51.365223] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
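
[editor note] As the NOTICE lines show, bdev_raid_create accepts base bdev names that do not exist yet: the array is registered in "configuring" state and waits for them to appear. raid1 takes no strip size, so there is no -z argument here:

    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
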
00:18:20.712   23:51:51	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:18:20.712   23:51:51	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:20.712   23:51:51	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:20.712   23:51:51	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:20.712   23:51:51	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:20.712   23:51:51	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:20.712   23:51:51	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:20.712   23:51:51	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:20.712   23:51:51	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:20.712   23:51:51	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:20.712    23:51:51	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:20.712    23:51:51	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:20.971   23:51:51	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:20.971    "name": "Existed_Raid",
00:18:20.971    "uuid": "00000000-0000-0000-0000-000000000000",
00:18:20.971    "strip_size_kb": 0,
00:18:20.971    "state": "configuring",
00:18:20.971    "raid_level": "raid1",
00:18:20.971    "superblock": false,
00:18:20.971    "num_base_bdevs": 4,
00:18:20.971    "num_base_bdevs_discovered": 0,
00:18:20.971    "num_base_bdevs_operational": 4,
00:18:20.971    "base_bdevs_list": [
00:18:20.971      {
00:18:20.971        "name": "BaseBdev1",
00:18:20.971        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:20.971        "is_configured": false,
00:18:20.971        "data_offset": 0,
00:18:20.971        "data_size": 0
00:18:20.971      },
00:18:20.971      {
00:18:20.971        "name": "BaseBdev2",
00:18:20.971        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:20.971        "is_configured": false,
00:18:20.971        "data_offset": 0,
00:18:20.971        "data_size": 0
00:18:20.971      },
00:18:20.971      {
00:18:20.971        "name": "BaseBdev3",
00:18:20.972        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:20.972        "is_configured": false,
00:18:20.972        "data_offset": 0,
00:18:20.972        "data_size": 0
00:18:20.972      },
00:18:20.972      {
00:18:20.972        "name": "BaseBdev4",
00:18:20.972        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:20.972        "is_configured": false,
00:18:20.972        "data_offset": 0,
00:18:20.972        "data_size": 0
00:18:20.972      }
00:18:20.972    ]
00:18:20.972  }'
00:18:20.972   23:51:51	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:20.972   23:51:51	-- common/autotest_common.sh@10 -- # set +x
00:18:21.907   23:51:52	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:18:21.907  [2024-12-13 23:51:52.541082] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:21.907  [2024-12-13 23:51:52.541113] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:18:21.907   23:51:52	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:18:22.166  [2024-12-13 23:51:52.805147] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:22.166  [2024-12-13 23:51:52.805200] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:22.166  [2024-12-13 23:51:52.805211] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:22.166  [2024-12-13 23:51:52.805234] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:22.166  [2024-12-13 23:51:52.805242] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:22.166  [2024-12-13 23:51:52.805275] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:22.166  [2024-12-13 23:51:52.805282] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:22.166  [2024-12-13 23:51:52.805304] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:22.166   23:51:52	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:18:22.424  [2024-12-13 23:51:53.098490] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:22.424  BaseBdev1
00:18:22.424   23:51:53	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:18:22.424   23:51:53	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:18:22.424   23:51:53	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:22.424   23:51:53	-- common/autotest_common.sh@899 -- # local i
00:18:22.424   23:51:53	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:22.424   23:51:53	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:22.424   23:51:53	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:22.682   23:51:53	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:18:22.941  [
00:18:22.941    {
00:18:22.941      "name": "BaseBdev1",
00:18:22.941      "aliases": [
00:18:22.941        "2df884c1-a131-4eb6-b9cf-7bc599d7a748"
00:18:22.941      ],
00:18:22.941      "product_name": "Malloc disk",
00:18:22.941      "block_size": 512,
00:18:22.941      "num_blocks": 65536,
00:18:22.941      "uuid": "2df884c1-a131-4eb6-b9cf-7bc599d7a748",
00:18:22.941      "assigned_rate_limits": {
00:18:22.941        "rw_ios_per_sec": 0,
00:18:22.941        "rw_mbytes_per_sec": 0,
00:18:22.941        "r_mbytes_per_sec": 0,
00:18:22.941        "w_mbytes_per_sec": 0
00:18:22.941      },
00:18:22.941      "claimed": true,
00:18:22.941      "claim_type": "exclusive_write",
00:18:22.941      "zoned": false,
00:18:22.941      "supported_io_types": {
00:18:22.941        "read": true,
00:18:22.941        "write": true,
00:18:22.941        "unmap": true,
00:18:22.941        "write_zeroes": true,
00:18:22.941        "flush": true,
00:18:22.941        "reset": true,
00:18:22.941        "compare": false,
00:18:22.941        "compare_and_write": false,
00:18:22.941        "abort": true,
00:18:22.941        "nvme_admin": false,
00:18:22.941        "nvme_io": false
00:18:22.941      },
00:18:22.941      "memory_domains": [
00:18:22.941        {
00:18:22.941          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:22.941          "dma_device_type": 2
00:18:22.941        }
00:18:22.941      ],
00:18:22.941      "driver_specific": {}
00:18:22.941    }
00:18:22.941  ]
00:18:22.941   23:51:53	-- common/autotest_common.sh@905 -- # return 0
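
[editor note] waitforbdev, traced above, first lets pending examine callbacks settle and then queries the named bdev with a timeout (2000 ms default in the trace); bdev_get_bdevs -t blocks until the bdev appears or the timeout expires. Condensed:

    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b BaseBdev1 -t 2000
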
00:18:22.941   23:51:53	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:18:22.941   23:51:53	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:22.941   23:51:53	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:22.941   23:51:53	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:22.941   23:51:53	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:22.941   23:51:53	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:22.941   23:51:53	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:22.941   23:51:53	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:22.941   23:51:53	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:22.941   23:51:53	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:22.941    23:51:53	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:22.941    23:51:53	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:23.200   23:51:53	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:23.200    "name": "Existed_Raid",
00:18:23.200    "uuid": "00000000-0000-0000-0000-000000000000",
00:18:23.200    "strip_size_kb": 0,
00:18:23.200    "state": "configuring",
00:18:23.200    "raid_level": "raid1",
00:18:23.200    "superblock": false,
00:18:23.200    "num_base_bdevs": 4,
00:18:23.200    "num_base_bdevs_discovered": 1,
00:18:23.200    "num_base_bdevs_operational": 4,
00:18:23.200    "base_bdevs_list": [
00:18:23.200      {
00:18:23.200        "name": "BaseBdev1",
00:18:23.200        "uuid": "2df884c1-a131-4eb6-b9cf-7bc599d7a748",
00:18:23.200        "is_configured": true,
00:18:23.200        "data_offset": 0,
00:18:23.200        "data_size": 65536
00:18:23.200      },
00:18:23.200      {
00:18:23.200        "name": "BaseBdev2",
00:18:23.200        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:23.200        "is_configured": false,
00:18:23.200        "data_offset": 0,
00:18:23.200        "data_size": 0
00:18:23.200      },
00:18:23.200      {
00:18:23.200        "name": "BaseBdev3",
00:18:23.200        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:23.200        "is_configured": false,
00:18:23.200        "data_offset": 0,
00:18:23.200        "data_size": 0
00:18:23.200      },
00:18:23.200      {
00:18:23.200        "name": "BaseBdev4",
00:18:23.200        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:23.200        "is_configured": false,
00:18:23.200        "data_offset": 0,
00:18:23.200        "data_size": 0
00:18:23.200      }
00:18:23.200    ]
00:18:23.200  }'
00:18:23.200   23:51:53	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:23.200   23:51:53	-- common/autotest_common.sh@10 -- # set +x
00:18:23.768   23:51:54	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:18:23.768  [2024-12-13 23:51:54.458706] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:23.768  [2024-12-13 23:51:54.458754] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:18:23.768   23:51:54	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:18:23.768   23:51:54	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:18:24.026  [2024-12-13 23:51:54.710786] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:24.026  [2024-12-13 23:51:54.712768] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:24.026  [2024-12-13 23:51:54.712838] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:24.026  [2024-12-13 23:51:54.712849] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:24.026  [2024-12-13 23:51:54.712873] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:24.026  [2024-12-13 23:51:54.712881] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:24.026  [2024-12-13 23:51:54.712897] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:24.026   23:51:54	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:18:24.026   23:51:54	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:24.026   23:51:54	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:18:24.026   23:51:54	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:24.026   23:51:54	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:24.026   23:51:54	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:24.026   23:51:54	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:24.026   23:51:54	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:24.026   23:51:54	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:24.027   23:51:54	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:24.027   23:51:54	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:24.027   23:51:54	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:24.027    23:51:54	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:24.027    23:51:54	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:24.285   23:51:54	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:24.285    "name": "Existed_Raid",
00:18:24.285    "uuid": "00000000-0000-0000-0000-000000000000",
00:18:24.285    "strip_size_kb": 0,
00:18:24.285    "state": "configuring",
00:18:24.285    "raid_level": "raid1",
00:18:24.285    "superblock": false,
00:18:24.285    "num_base_bdevs": 4,
00:18:24.285    "num_base_bdevs_discovered": 1,
00:18:24.285    "num_base_bdevs_operational": 4,
00:18:24.285    "base_bdevs_list": [
00:18:24.285      {
00:18:24.285        "name": "BaseBdev1",
00:18:24.285        "uuid": "2df884c1-a131-4eb6-b9cf-7bc599d7a748",
00:18:24.285        "is_configured": true,
00:18:24.285        "data_offset": 0,
00:18:24.285        "data_size": 65536
00:18:24.285      },
00:18:24.285      {
00:18:24.285        "name": "BaseBdev2",
00:18:24.285        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:24.285        "is_configured": false,
00:18:24.285        "data_offset": 0,
00:18:24.285        "data_size": 0
00:18:24.285      },
00:18:24.285      {
00:18:24.285        "name": "BaseBdev3",
00:18:24.285        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:24.285        "is_configured": false,
00:18:24.285        "data_offset": 0,
00:18:24.285        "data_size": 0
00:18:24.285      },
00:18:24.285      {
00:18:24.285        "name": "BaseBdev4",
00:18:24.285        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:24.285        "is_configured": false,
00:18:24.285        "data_offset": 0,
00:18:24.285        "data_size": 0
00:18:24.286      }
00:18:24.286    ]
00:18:24.286  }'
00:18:24.286   23:51:54	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:24.286   23:51:54	-- common/autotest_common.sh@10 -- # set +x
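The verify_raid_bdev_state helper traced above (bdev_raid.sh@117-129) fetches the raid bdev's JSON over the /var/tmp/spdk-raid.sock RPC socket, filters it with jq, and compares fields against the expectations passed in. A minimal bash sketch of that flow, assuming only rpc.py and jq; the function body is a condensed approximation, not the script's exact code:

    #!/usr/bin/env bash
    # Condensed approximation of verify_raid_bdev_state as driven in the trace.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    verify_state() {
        local name=$1 expected=$2        # e.g. Existed_Raid configuring
        local info
        # bdev_raid_get_bdevs returns a JSON array; select the raid bdev by name
        info=$($rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        [[ -n $info ]] || { echo "raid bdev $name not found" >&2; return 1; }
        # the JSON above reports state "configuring" with 1 of 4 bases discovered
        [[ $(jq -r '.state' <<< "$info") == "$expected" ]]
    }

    verify_state Existed_Raid configuring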
00:18:24.852   23:51:55	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:18:25.111  [2024-12-13 23:51:55.765539] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:25.111  BaseBdev2
00:18:25.111   23:51:55	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:18:25.111   23:51:55	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:18:25.111   23:51:55	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:25.111   23:51:55	-- common/autotest_common.sh@899 -- # local i
00:18:25.111   23:51:55	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:25.111   23:51:55	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:25.111   23:51:55	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:25.369   23:51:55	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:18:25.627  [
00:18:25.627    {
00:18:25.627      "name": "BaseBdev2",
00:18:25.627      "aliases": [
00:18:25.627        "9ba05db7-4abc-4f91-b6f9-d0164685aad4"
00:18:25.627      ],
00:18:25.627      "product_name": "Malloc disk",
00:18:25.627      "block_size": 512,
00:18:25.627      "num_blocks": 65536,
00:18:25.627      "uuid": "9ba05db7-4abc-4f91-b6f9-d0164685aad4",
00:18:25.627      "assigned_rate_limits": {
00:18:25.627        "rw_ios_per_sec": 0,
00:18:25.627        "rw_mbytes_per_sec": 0,
00:18:25.627        "r_mbytes_per_sec": 0,
00:18:25.627        "w_mbytes_per_sec": 0
00:18:25.627      },
00:18:25.627      "claimed": true,
00:18:25.627      "claim_type": "exclusive_write",
00:18:25.627      "zoned": false,
00:18:25.627      "supported_io_types": {
00:18:25.627        "read": true,
00:18:25.627        "write": true,
00:18:25.627        "unmap": true,
00:18:25.627        "write_zeroes": true,
00:18:25.627        "flush": true,
00:18:25.627        "reset": true,
00:18:25.627        "compare": false,
00:18:25.627        "compare_and_write": false,
00:18:25.627        "abort": true,
00:18:25.627        "nvme_admin": false,
00:18:25.627        "nvme_io": false
00:18:25.627      },
00:18:25.627      "memory_domains": [
00:18:25.627        {
00:18:25.627          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:25.627          "dma_device_type": 2
00:18:25.627        }
00:18:25.627      ],
00:18:25.627      "driver_specific": {}
00:18:25.627    }
00:18:25.627  ]
00:18:25.627   23:51:56	-- common/autotest_common.sh@905 -- # return 0
00:18:25.627   23:51:56	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:18:25.627   23:51:56	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:25.627   23:51:56	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:18:25.627   23:51:56	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:25.627   23:51:56	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:25.627   23:51:56	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:25.627   23:51:56	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:25.627   23:51:56	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:25.627   23:51:56	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:25.627   23:51:56	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:25.627   23:51:56	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:25.627   23:51:56	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:25.627    23:51:56	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:25.627    23:51:56	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:25.886   23:51:56	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:25.886    "name": "Existed_Raid",
00:18:25.886    "uuid": "00000000-0000-0000-0000-000000000000",
00:18:25.886    "strip_size_kb": 0,
00:18:25.886    "state": "configuring",
00:18:25.886    "raid_level": "raid1",
00:18:25.886    "superblock": false,
00:18:25.886    "num_base_bdevs": 4,
00:18:25.886    "num_base_bdevs_discovered": 2,
00:18:25.886    "num_base_bdevs_operational": 4,
00:18:25.886    "base_bdevs_list": [
00:18:25.886      {
00:18:25.886        "name": "BaseBdev1",
00:18:25.886        "uuid": "2df884c1-a131-4eb6-b9cf-7bc599d7a748",
00:18:25.886        "is_configured": true,
00:18:25.886        "data_offset": 0,
00:18:25.886        "data_size": 65536
00:18:25.886      },
00:18:25.886      {
00:18:25.886        "name": "BaseBdev2",
00:18:25.886        "uuid": "9ba05db7-4abc-4f91-b6f9-d0164685aad4",
00:18:25.886        "is_configured": true,
00:18:25.886        "data_offset": 0,
00:18:25.886        "data_size": 65536
00:18:25.886      },
00:18:25.886      {
00:18:25.886        "name": "BaseBdev3",
00:18:25.886        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:25.886        "is_configured": false,
00:18:25.886        "data_offset": 0,
00:18:25.886        "data_size": 0
00:18:25.886      },
00:18:25.886      {
00:18:25.886        "name": "BaseBdev4",
00:18:25.886        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:25.886        "is_configured": false,
00:18:25.886        "data_offset": 0,
00:18:25.886        "data_size": 0
00:18:25.886      }
00:18:25.886    ]
00:18:25.886  }'
00:18:25.886   23:51:56	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:25.886   23:51:56	-- common/autotest_common.sh@10 -- # set +x
00:18:26.452   23:51:56	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:18:26.711  [2024-12-13 23:51:57.189444] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:18:26.711  BaseBdev3
00:18:26.711   23:51:57	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:18:26.711   23:51:57	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:18:26.711   23:51:57	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:26.711   23:51:57	-- common/autotest_common.sh@899 -- # local i
00:18:26.711   23:51:57	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:26.711   23:51:57	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:26.711   23:51:57	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:26.711   23:51:57	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:18:26.970  [
00:18:26.970    {
00:18:26.970      "name": "BaseBdev3",
00:18:26.970      "aliases": [
00:18:26.970        "f4240835-1f36-4774-a735-70a64d772671"
00:18:26.970      ],
00:18:26.970      "product_name": "Malloc disk",
00:18:26.970      "block_size": 512,
00:18:26.970      "num_blocks": 65536,
00:18:26.970      "uuid": "f4240835-1f36-4774-a735-70a64d772671",
00:18:26.970      "assigned_rate_limits": {
00:18:26.970        "rw_ios_per_sec": 0,
00:18:26.970        "rw_mbytes_per_sec": 0,
00:18:26.970        "r_mbytes_per_sec": 0,
00:18:26.970        "w_mbytes_per_sec": 0
00:18:26.970      },
00:18:26.970      "claimed": true,
00:18:26.970      "claim_type": "exclusive_write",
00:18:26.970      "zoned": false,
00:18:26.970      "supported_io_types": {
00:18:26.970        "read": true,
00:18:26.970        "write": true,
00:18:26.970        "unmap": true,
00:18:26.970        "write_zeroes": true,
00:18:26.970        "flush": true,
00:18:26.970        "reset": true,
00:18:26.970        "compare": false,
00:18:26.970        "compare_and_write": false,
00:18:26.970        "abort": true,
00:18:26.970        "nvme_admin": false,
00:18:26.970        "nvme_io": false
00:18:26.970      },
00:18:26.970      "memory_domains": [
00:18:26.970        {
00:18:26.970          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:26.970          "dma_device_type": 2
00:18:26.970        }
00:18:26.970      ],
00:18:26.970      "driver_specific": {}
00:18:26.970    }
00:18:26.970  ]
00:18:26.970   23:51:57	-- common/autotest_common.sh@905 -- # return 0
00:18:26.970   23:51:57	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:18:26.970   23:51:57	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:26.970   23:51:57	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:18:26.970   23:51:57	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:26.970   23:51:57	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:26.970   23:51:57	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:26.970   23:51:57	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:26.970   23:51:57	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:26.970   23:51:57	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:26.970   23:51:57	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:26.970   23:51:57	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:26.970   23:51:57	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:26.970    23:51:57	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:26.970    23:51:57	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:27.229   23:51:57	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:27.229    "name": "Existed_Raid",
00:18:27.229    "uuid": "00000000-0000-0000-0000-000000000000",
00:18:27.229    "strip_size_kb": 0,
00:18:27.229    "state": "configuring",
00:18:27.229    "raid_level": "raid1",
00:18:27.229    "superblock": false,
00:18:27.229    "num_base_bdevs": 4,
00:18:27.229    "num_base_bdevs_discovered": 3,
00:18:27.229    "num_base_bdevs_operational": 4,
00:18:27.229    "base_bdevs_list": [
00:18:27.229      {
00:18:27.229        "name": "BaseBdev1",
00:18:27.229        "uuid": "2df884c1-a131-4eb6-b9cf-7bc599d7a748",
00:18:27.229        "is_configured": true,
00:18:27.229        "data_offset": 0,
00:18:27.229        "data_size": 65536
00:18:27.229      },
00:18:27.229      {
00:18:27.229        "name": "BaseBdev2",
00:18:27.229        "uuid": "9ba05db7-4abc-4f91-b6f9-d0164685aad4",
00:18:27.229        "is_configured": true,
00:18:27.229        "data_offset": 0,
00:18:27.229        "data_size": 65536
00:18:27.229      },
00:18:27.229      {
00:18:27.229        "name": "BaseBdev3",
00:18:27.229        "uuid": "f4240835-1f36-4774-a735-70a64d772671",
00:18:27.229        "is_configured": true,
00:18:27.229        "data_offset": 0,
00:18:27.229        "data_size": 65536
00:18:27.229      },
00:18:27.229      {
00:18:27.229        "name": "BaseBdev4",
00:18:27.229        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:27.229        "is_configured": false,
00:18:27.229        "data_offset": 0,
00:18:27.229        "data_size": 0
00:18:27.229      }
00:18:27.229    ]
00:18:27.229  }'
00:18:27.229   23:51:57	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:27.229   23:51:57	-- common/autotest_common.sh@10 -- # set +x
00:18:27.796   23:51:58	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:18:28.056  [2024-12-13 23:51:58.677211] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:18:28.056  [2024-12-13 23:51:58.677270] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:18:28.056  [2024-12-13 23:51:58.677279] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:18:28.056  [2024-12-13 23:51:58.677433] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:18:28.056  [2024-12-13 23:51:58.677799] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:18:28.056  [2024-12-13 23:51:58.677899] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80
00:18:28.057  [2024-12-13 23:51:58.678319] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:28.057  BaseBdev4
00:18:28.057   23:51:58	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:18:28.057   23:51:58	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:18:28.057   23:51:58	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:28.057   23:51:58	-- common/autotest_common.sh@899 -- # local i
00:18:28.057   23:51:58	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:28.057   23:51:58	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:28.057   23:51:58	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:28.346   23:51:58	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:18:28.630  [
00:18:28.630    {
00:18:28.630      "name": "BaseBdev4",
00:18:28.630      "aliases": [
00:18:28.630        "87f1ad3a-d667-4814-ba2f-7ddcf5c684c2"
00:18:28.630      ],
00:18:28.630      "product_name": "Malloc disk",
00:18:28.630      "block_size": 512,
00:18:28.630      "num_blocks": 65536,
00:18:28.630      "uuid": "87f1ad3a-d667-4814-ba2f-7ddcf5c684c2",
00:18:28.630      "assigned_rate_limits": {
00:18:28.630        "rw_ios_per_sec": 0,
00:18:28.630        "rw_mbytes_per_sec": 0,
00:18:28.630        "r_mbytes_per_sec": 0,
00:18:28.630        "w_mbytes_per_sec": 0
00:18:28.630      },
00:18:28.630      "claimed": true,
00:18:28.630      "claim_type": "exclusive_write",
00:18:28.630      "zoned": false,
00:18:28.630      "supported_io_types": {
00:18:28.630        "read": true,
00:18:28.630        "write": true,
00:18:28.630        "unmap": true,
00:18:28.630        "write_zeroes": true,
00:18:28.630        "flush": true,
00:18:28.630        "reset": true,
00:18:28.630        "compare": false,
00:18:28.630        "compare_and_write": false,
00:18:28.630        "abort": true,
00:18:28.630        "nvme_admin": false,
00:18:28.630        "nvme_io": false
00:18:28.630      },
00:18:28.630      "memory_domains": [
00:18:28.630        {
00:18:28.630          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:28.630          "dma_device_type": 2
00:18:28.630        }
00:18:28.630      ],
00:18:28.630      "driver_specific": {}
00:18:28.630    }
00:18:28.630  ]
00:18:28.630   23:51:59	-- common/autotest_common.sh@905 -- # return 0
00:18:28.630   23:51:59	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:18:28.630   23:51:59	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:28.630   23:51:59	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4
00:18:28.630   23:51:59	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:28.630   23:51:59	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:28.630   23:51:59	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:28.630   23:51:59	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:28.630   23:51:59	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:28.630   23:51:59	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:28.630   23:51:59	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:28.630   23:51:59	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:28.630   23:51:59	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:28.630    23:51:59	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:28.630    23:51:59	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:28.630   23:51:59	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:28.630    "name": "Existed_Raid",
00:18:28.630    "uuid": "945b0b74-219f-48fa-828c-8146c6a7b832",
00:18:28.630    "strip_size_kb": 0,
00:18:28.630    "state": "online",
00:18:28.630    "raid_level": "raid1",
00:18:28.630    "superblock": false,
00:18:28.630    "num_base_bdevs": 4,
00:18:28.630    "num_base_bdevs_discovered": 4,
00:18:28.630    "num_base_bdevs_operational": 4,
00:18:28.630    "base_bdevs_list": [
00:18:28.630      {
00:18:28.630        "name": "BaseBdev1",
00:18:28.630        "uuid": "2df884c1-a131-4eb6-b9cf-7bc599d7a748",
00:18:28.630        "is_configured": true,
00:18:28.630        "data_offset": 0,
00:18:28.630        "data_size": 65536
00:18:28.630      },
00:18:28.630      {
00:18:28.630        "name": "BaseBdev2",
00:18:28.630        "uuid": "9ba05db7-4abc-4f91-b6f9-d0164685aad4",
00:18:28.630        "is_configured": true,
00:18:28.630        "data_offset": 0,
00:18:28.630        "data_size": 65536
00:18:28.630      },
00:18:28.630      {
00:18:28.630        "name": "BaseBdev3",
00:18:28.630        "uuid": "f4240835-1f36-4774-a735-70a64d772671",
00:18:28.630        "is_configured": true,
00:18:28.630        "data_offset": 0,
00:18:28.630        "data_size": 65536
00:18:28.630      },
00:18:28.630      {
00:18:28.630        "name": "BaseBdev4",
00:18:28.630        "uuid": "87f1ad3a-d667-4814-ba2f-7ddcf5c684c2",
00:18:28.630        "is_configured": true,
00:18:28.630        "data_offset": 0,
00:18:28.630        "data_size": 65536
00:18:28.630      }
00:18:28.630    ]
00:18:28.630  }'
00:18:28.630   23:51:59	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:28.630   23:51:59	-- common/autotest_common.sh@10 -- # set +x
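The bdev_raid.sh@254-257 loop above repeats this create-wait-verify cycle once per missing base bdev; after BaseBdev4 is claimed the module registers the io device (0x616000006f80) and the reported state flips from configuring to online. A hedged sketch of that loop, with sizes and names taken from the trace (65536-block malloc bdevs with 512 B blocks, i.e. 32 MiB each):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b "BaseBdev$i"   # 65536 blocks x 512 B = 32 MiB
        $rpc bdev_wait_for_examine                       # let the raid module claim it
    done
    $rpc bdev_raid_get_bdevs all | jq -r '.[0].state'    # -> online once 4 of 4 are claimed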
00:18:29.197   23:51:59	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:18:29.456  [2024-12-13 23:52:00.017499] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:29.456   23:52:00	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:18:29.456   23:52:00	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid1
00:18:29.456   23:52:00	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:18:29.456   23:52:00	-- bdev/bdev_raid.sh@196 -- # return 0
00:18:29.456   23:52:00	-- bdev/bdev_raid.sh@267 -- # expected_state=online
00:18:29.456   23:52:00	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:18:29.456   23:52:00	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:29.456   23:52:00	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:29.456   23:52:00	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:29.456   23:52:00	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:29.456   23:52:00	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:18:29.456   23:52:00	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:29.456   23:52:00	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:29.456   23:52:00	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:29.456   23:52:00	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:29.456    23:52:00	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:29.456    23:52:00	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:29.714   23:52:00	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:29.714    "name": "Existed_Raid",
00:18:29.714    "uuid": "945b0b74-219f-48fa-828c-8146c6a7b832",
00:18:29.714    "strip_size_kb": 0,
00:18:29.714    "state": "online",
00:18:29.714    "raid_level": "raid1",
00:18:29.714    "superblock": false,
00:18:29.714    "num_base_bdevs": 4,
00:18:29.714    "num_base_bdevs_discovered": 3,
00:18:29.714    "num_base_bdevs_operational": 3,
00:18:29.714    "base_bdevs_list": [
00:18:29.714      {
00:18:29.714        "name": null,
00:18:29.714        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:29.714        "is_configured": false,
00:18:29.714        "data_offset": 0,
00:18:29.714        "data_size": 65536
00:18:29.714      },
00:18:29.714      {
00:18:29.714        "name": "BaseBdev2",
00:18:29.714        "uuid": "9ba05db7-4abc-4f91-b6f9-d0164685aad4",
00:18:29.714        "is_configured": true,
00:18:29.714        "data_offset": 0,
00:18:29.714        "data_size": 65536
00:18:29.714      },
00:18:29.714      {
00:18:29.714        "name": "BaseBdev3",
00:18:29.714        "uuid": "f4240835-1f36-4774-a735-70a64d772671",
00:18:29.714        "is_configured": true,
00:18:29.714        "data_offset": 0,
00:18:29.714        "data_size": 65536
00:18:29.714      },
00:18:29.714      {
00:18:29.714        "name": "BaseBdev4",
00:18:29.714        "uuid": "87f1ad3a-d667-4814-ba2f-7ddcf5c684c2",
00:18:29.714        "is_configured": true,
00:18:29.714        "data_offset": 0,
00:18:29.714        "data_size": 65536
00:18:29.714      }
00:18:29.714    ]
00:18:29.714  }'
00:18:29.714   23:52:00	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:29.714   23:52:00	-- common/autotest_common.sh@10 -- # set +x
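Because raid1 mirrors its base bdevs, has_redundancy (bdev_raid.sh@195-196 in the trace) returns 0 and the expected state after dropping one leg stays online with 3 of 4 bases discovered. A sketch of that decision; the non-raid1 arm here is an assumption for illustration, not the script's full case list:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    has_redundancy() {
        case $1 in
            raid1) return 0 ;;    # mirrored: survives a missing base bdev
            *)     return 1 ;;    # e.g. raid0 would be expected to go offline
        esac
    }
    expected_state=offline
    has_redundancy raid1 && expected_state=online
    $rpc bdev_malloc_delete BaseBdev1                    # drop one mirror leg
    $rpc bdev_raid_get_bdevs all | jq -r '.[0].state'    # -> online, 3 of 4 bases left

As the trace further down shows, the array stays online through the later deletions until the final base bdev is removed, at which point it deconfigures from online to offline and is destructed.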
00:18:30.281   23:52:00	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:18:30.281   23:52:00	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:30.281    23:52:00	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:30.281    23:52:00	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:18:30.539   23:52:01	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:18:30.539   23:52:01	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:30.539   23:52:01	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:18:30.798  [2024-12-13 23:52:01.280939] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:18:30.798   23:52:01	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:18:30.798   23:52:01	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:30.798    23:52:01	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:30.798    23:52:01	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:18:31.057   23:52:01	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:18:31.057   23:52:01	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:31.057   23:52:01	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:18:31.314  [2024-12-13 23:52:01.829036] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:18:31.314   23:52:01	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:18:31.314   23:52:01	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:31.314    23:52:01	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:31.314    23:52:01	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:18:31.573   23:52:02	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:18:31.573   23:52:02	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:31.573   23:52:02	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:18:31.573  [2024-12-13 23:52:02.272153] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:18:31.573  [2024-12-13 23:52:02.272313] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:31.573  [2024-12-13 23:52:02.272525] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:31.832  [2024-12-13 23:52:02.338820] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:31.832  [2024-12-13 23:52:02.339002] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline
00:18:31.832   23:52:02	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:18:31.832   23:52:02	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:31.832    23:52:02	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:31.832    23:52:02	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:18:31.832   23:52:02	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:18:31.832   23:52:02	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:18:31.832   23:52:02	-- bdev/bdev_raid.sh@287 -- # killprocess 120750
00:18:31.832   23:52:02	-- common/autotest_common.sh@936 -- # '[' -z 120750 ']'
00:18:31.832   23:52:02	-- common/autotest_common.sh@940 -- # kill -0 120750
00:18:31.832    23:52:02	-- common/autotest_common.sh@941 -- # uname
00:18:31.832   23:52:02	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:31.832    23:52:02	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120750
00:18:32.091   23:52:02	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:32.091   23:52:02	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:32.091   23:52:02	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 120750'
00:18:32.091  killing process with pid 120750
00:18:32.091   23:52:02	-- common/autotest_common.sh@955 -- # kill 120750
00:18:32.091  [2024-12-13 23:52:02.573694] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:32.091   23:52:02	-- common/autotest_common.sh@960 -- # wait 120750
00:18:32.091  [2024-12-13 23:52:02.573999] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@289 -- # return 0
00:18:33.025  
00:18:33.025  real	0m13.409s
00:18:33.025  user	0m23.776s
00:18:33.025  sys	0m1.687s
00:18:33.025   23:52:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:18:33.025   23:52:03	-- common/autotest_common.sh@10 -- # set +x
00:18:33.025  ************************************
00:18:33.025  END TEST raid_state_function_test
00:18:33.025  ************************************
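The teardown before the END TEST banner goes through killprocess (common/autotest_common.sh@936-960 in the trace): validate the pid, probe it with kill -0, SIGTERM it, and wait for it to be reaped. A condensed sketch following those trace lines; the real helper also inspects uname and the process name to special-case processes run under sudo, omitted here:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1            # @936: refuse an empty pid
        kill -0 "$pid" || return 1           # @940: confirm it is still running
        echo "killing process with pid $pid"
        kill "$pid"                          # @955: SIGTERM the SPDK app
        wait "$pid"                          # @960: reap it (a child of this shell)
    }

    killprocess 120750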
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true
00:18:33.025   23:52:03	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:18:33.025   23:52:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:33.025   23:52:03	-- common/autotest_common.sh@10 -- # set +x
00:18:33.025  ************************************
00:18:33.025  START TEST raid_state_function_test_sb
00:18:33.025  ************************************
00:18:33.025   23:52:03	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 true
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid1
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:18:33.025    23:52:03	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:18:33.025    23:52:03	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:18:33.025    23:52:03	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:18:33.025    23:52:03	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:18:33.025    23:52:03	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:18:33.025    23:52:03	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:18:33.025    23:52:03	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:18:33.025    23:52:03	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:18:33.025    23:52:03	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:18:33.025    23:52:03	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:18:33.025    23:52:03	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:18:33.025    23:52:03	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:18:33.025    23:52:03	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:18:33.025    23:52:03	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']'
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@216 -- # strip_size=0
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@226 -- # raid_pid=121177
00:18:33.025  Process raid pid: 121177
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121177'
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:18:33.025   23:52:03	-- bdev/bdev_raid.sh@228 -- # waitforlisten 121177 /var/tmp/spdk-raid.sock
00:18:33.025   23:52:03	-- common/autotest_common.sh@829 -- # '[' -z 121177 ']'
00:18:33.025   23:52:03	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:18:33.025   23:52:03	-- common/autotest_common.sh@834 -- # local max_retries=100
00:18:33.025  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:18:33.025   23:52:03	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:18:33.025   23:52:03	-- common/autotest_common.sh@838 -- # xtrace_disable
00:18:33.025   23:52:03	-- common/autotest_common.sh@10 -- # set +x
00:18:33.025  [2024-12-13 23:52:03.734557] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:18:33.025  [2024-12-13 23:52:03.734751] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:33.284  [2024-12-13 23:52:03.906300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:33.542  [2024-12-13 23:52:04.097301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:33.800  [2024-12-13 23:52:04.285560] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:34.059   23:52:04	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:34.059   23:52:04	-- common/autotest_common.sh@862 -- # return 0
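waitforlisten (common/autotest_common.sh@829-862 in the trace above) polls until the freshly started bdev_svc has its RPC socket up, bounded by max_retries=100. A rough sketch of that wait; the bare socket test used here is a simplification of the helper's actual readiness probe:

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk-raid.sock}
        local max_retries=100 i                      # @834 in the trace
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
            [[ -S $sock ]] && return 0               # socket exists: rpc.py can connect
            sleep 0.1
        done
        return 1                                     # timed out
    }

    waitforlisten 121177 /var/tmp/spdk-raid.sock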
00:18:34.059   23:52:04	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:18:34.317  [2024-12-13 23:52:04.877134] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:34.317  [2024-12-13 23:52:04.877203] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:34.317  [2024-12-13 23:52:04.877215] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:34.317  [2024-12-13 23:52:04.877235] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:34.317  [2024-12-13 23:52:04.877242] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:34.317  [2024-12-13 23:52:04.877274] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:34.317  [2024-12-13 23:52:04.877282] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:34.317  [2024-12-13 23:52:04.877301] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:34.317   23:52:04	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:18:34.317   23:52:04	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:34.317   23:52:04	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:34.317   23:52:04	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:34.317   23:52:04	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:34.317   23:52:04	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:34.317   23:52:04	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:34.317   23:52:04	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:34.317   23:52:04	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:34.317   23:52:04	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:34.317    23:52:04	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:34.317    23:52:04	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:34.582   23:52:05	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:34.582    "name": "Existed_Raid",
00:18:34.582    "uuid": "c11ffeef-944f-4c9d-8a8c-cb3a2db4862b",
00:18:34.582    "strip_size_kb": 0,
00:18:34.582    "state": "configuring",
00:18:34.582    "raid_level": "raid1",
00:18:34.582    "superblock": true,
00:18:34.582    "num_base_bdevs": 4,
00:18:34.582    "num_base_bdevs_discovered": 0,
00:18:34.582    "num_base_bdevs_operational": 4,
00:18:34.582    "base_bdevs_list": [
00:18:34.582      {
00:18:34.582        "name": "BaseBdev1",
00:18:34.582        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:34.582        "is_configured": false,
00:18:34.582        "data_offset": 0,
00:18:34.582        "data_size": 0
00:18:34.582      },
00:18:34.582      {
00:18:34.582        "name": "BaseBdev2",
00:18:34.582        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:34.582        "is_configured": false,
00:18:34.582        "data_offset": 0,
00:18:34.582        "data_size": 0
00:18:34.582      },
00:18:34.582      {
00:18:34.582        "name": "BaseBdev3",
00:18:34.582        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:34.582        "is_configured": false,
00:18:34.582        "data_offset": 0,
00:18:34.582        "data_size": 0
00:18:34.582      },
00:18:34.582      {
00:18:34.582        "name": "BaseBdev4",
00:18:34.582        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:34.582        "is_configured": false,
00:18:34.582        "data_offset": 0,
00:18:34.582        "data_size": 0
00:18:34.582      }
00:18:34.582    ]
00:18:34.582  }'
00:18:34.582   23:52:05	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:34.582   23:52:05	-- common/autotest_common.sh@10 -- # set +x
00:18:35.154   23:52:05	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:18:35.412  [2024-12-13 23:52:06.025246] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:35.412  [2024-12-13 23:52:06.025402] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:18:35.412   23:52:06	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:18:35.670  [2024-12-13 23:52:06.281311] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:35.670  [2024-12-13 23:52:06.281850] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:35.670  [2024-12-13 23:52:06.281990] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:35.670  [2024-12-13 23:52:06.282150] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:35.670  [2024-12-13 23:52:06.282295] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:35.670  [2024-12-13 23:52:06.282469] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:35.670  [2024-12-13 23:52:06.282579] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:35.670  [2024-12-13 23:52:06.282735] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:35.670   23:52:06	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:18:35.929  [2024-12-13 23:52:06.508674] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:35.929  BaseBdev1
00:18:35.929   23:52:06	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:18:35.929   23:52:06	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:18:35.929   23:52:06	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:35.929   23:52:06	-- common/autotest_common.sh@899 -- # local i
00:18:35.929   23:52:06	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:35.929   23:52:06	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:35.929   23:52:06	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:36.188   23:52:06	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:18:36.188  [
00:18:36.188    {
00:18:36.188      "name": "BaseBdev1",
00:18:36.188      "aliases": [
00:18:36.188        "b6368825-0dc3-4803-878f-23f129048016"
00:18:36.188      ],
00:18:36.188      "product_name": "Malloc disk",
00:18:36.189      "block_size": 512,
00:18:36.189      "num_blocks": 65536,
00:18:36.189      "uuid": "b6368825-0dc3-4803-878f-23f129048016",
00:18:36.189      "assigned_rate_limits": {
00:18:36.189        "rw_ios_per_sec": 0,
00:18:36.189        "rw_mbytes_per_sec": 0,
00:18:36.189        "r_mbytes_per_sec": 0,
00:18:36.189        "w_mbytes_per_sec": 0
00:18:36.189      },
00:18:36.189      "claimed": true,
00:18:36.189      "claim_type": "exclusive_write",
00:18:36.189      "zoned": false,
00:18:36.189      "supported_io_types": {
00:18:36.189        "read": true,
00:18:36.189        "write": true,
00:18:36.189        "unmap": true,
00:18:36.189        "write_zeroes": true,
00:18:36.189        "flush": true,
00:18:36.189        "reset": true,
00:18:36.189        "compare": false,
00:18:36.189        "compare_and_write": false,
00:18:36.189        "abort": true,
00:18:36.189        "nvme_admin": false,
00:18:36.189        "nvme_io": false
00:18:36.189      },
00:18:36.189      "memory_domains": [
00:18:36.189        {
00:18:36.189          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:36.189          "dma_device_type": 2
00:18:36.189        }
00:18:36.189      ],
00:18:36.189      "driver_specific": {}
00:18:36.189    }
00:18:36.189  ]
00:18:36.189   23:52:06	-- common/autotest_common.sh@905 -- # return 0
00:18:36.189   23:52:06	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:18:36.189   23:52:06	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:36.189   23:52:06	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:36.189   23:52:06	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:36.189   23:52:06	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:36.189   23:52:06	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:36.189   23:52:06	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:36.189   23:52:06	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:36.189   23:52:06	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:36.189   23:52:06	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:36.189    23:52:06	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:36.189    23:52:06	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:36.756   23:52:07	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:36.756    "name": "Existed_Raid",
00:18:36.756    "uuid": "d3b42e32-4eac-48a6-b67d-e6b7560aa9f2",
00:18:36.756    "strip_size_kb": 0,
00:18:36.756    "state": "configuring",
00:18:36.756    "raid_level": "raid1",
00:18:36.756    "superblock": true,
00:18:36.756    "num_base_bdevs": 4,
00:18:36.756    "num_base_bdevs_discovered": 1,
00:18:36.756    "num_base_bdevs_operational": 4,
00:18:36.756    "base_bdevs_list": [
00:18:36.756      {
00:18:36.756        "name": "BaseBdev1",
00:18:36.756        "uuid": "b6368825-0dc3-4803-878f-23f129048016",
00:18:36.756        "is_configured": true,
00:18:36.756        "data_offset": 2048,
00:18:36.756        "data_size": 63488
00:18:36.756      },
00:18:36.756      {
00:18:36.756        "name": "BaseBdev2",
00:18:36.756        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:36.756        "is_configured": false,
00:18:36.756        "data_offset": 0,
00:18:36.756        "data_size": 0
00:18:36.756      },
00:18:36.756      {
00:18:36.756        "name": "BaseBdev3",
00:18:36.756        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:36.756        "is_configured": false,
00:18:36.756        "data_offset": 0,
00:18:36.756        "data_size": 0
00:18:36.756      },
00:18:36.756      {
00:18:36.756        "name": "BaseBdev4",
00:18:36.756        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:36.756        "is_configured": false,
00:18:36.756        "data_offset": 0,
00:18:36.756        "data_size": 0
00:18:36.756      }
00:18:36.756    ]
00:18:36.756  }'
00:18:36.756   23:52:07	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:36.756   23:52:07	-- common/autotest_common.sh@10 -- # set +x
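This is the first state verify of the superblock run: with -s each 65536-block base bdev now reserves metadata at the front, which is where the data_offset 2048 / data_size 63488 in the JSON above come from (2048 blocks x 512 B = 1 MiB reserved; 65536 - 2048 = 63488 usable blocks). The traced create call, for reference:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # -s writes an on-disk superblock, shifting data_offset from 0 to 2048 blocks
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid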
00:18:37.322   23:52:07	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:18:37.322  [2024-12-13 23:52:08.004939] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:37.322  [2024-12-13 23:52:08.005093] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:18:37.322   23:52:08	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:18:37.322   23:52:08	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:18:37.889   23:52:08	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:18:37.889  BaseBdev1
00:18:37.889   23:52:08	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:18:37.889   23:52:08	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:18:37.889   23:52:08	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:37.889   23:52:08	-- common/autotest_common.sh@899 -- # local i
00:18:37.889   23:52:08	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:37.889   23:52:08	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:37.889   23:52:08	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:38.147   23:52:08	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:18:38.406  [
00:18:38.406    {
00:18:38.406      "name": "BaseBdev1",
00:18:38.406      "aliases": [
00:18:38.406        "2ed538ca-d665-4daa-80fd-0030c11e2c12"
00:18:38.406      ],
00:18:38.406      "product_name": "Malloc disk",
00:18:38.406      "block_size": 512,
00:18:38.406      "num_blocks": 65536,
00:18:38.406      "uuid": "2ed538ca-d665-4daa-80fd-0030c11e2c12",
00:18:38.406      "assigned_rate_limits": {
00:18:38.406        "rw_ios_per_sec": 0,
00:18:38.406        "rw_mbytes_per_sec": 0,
00:18:38.406        "r_mbytes_per_sec": 0,
00:18:38.406        "w_mbytes_per_sec": 0
00:18:38.406      },
00:18:38.406      "claimed": false,
00:18:38.406      "zoned": false,
00:18:38.406      "supported_io_types": {
00:18:38.406        "read": true,
00:18:38.406        "write": true,
00:18:38.406        "unmap": true,
00:18:38.406        "write_zeroes": true,
00:18:38.406        "flush": true,
00:18:38.406        "reset": true,
00:18:38.406        "compare": false,
00:18:38.406        "compare_and_write": false,
00:18:38.406        "abort": true,
00:18:38.406        "nvme_admin": false,
00:18:38.406        "nvme_io": false
00:18:38.406      },
00:18:38.406      "memory_domains": [
00:18:38.406        {
00:18:38.406          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:38.406          "dma_device_type": 2
00:18:38.406        }
00:18:38.406      ],
00:18:38.406      "driver_specific": {}
00:18:38.406    }
00:18:38.406  ]
00:18:38.406   23:52:08	-- common/autotest_common.sh@905 -- # return 0
00:18:38.406   23:52:08	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:18:38.665  [2024-12-13 23:52:09.152645] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:38.665  [2024-12-13 23:52:09.154483] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:38.665  [2024-12-13 23:52:09.155031] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:38.665  [2024-12-13 23:52:09.155197] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:38.665  [2024-12-13 23:52:09.155357] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:38.665  [2024-12-13 23:52:09.155554] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:38.665  [2024-12-13 23:52:09.155758] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:38.665   23:52:09	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:18:38.665   23:52:09	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:38.665   23:52:09	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:18:38.665   23:52:09	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:38.665   23:52:09	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:38.665   23:52:09	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:38.665   23:52:09	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:38.665   23:52:09	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:38.665   23:52:09	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:38.665   23:52:09	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:38.665   23:52:09	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:38.665   23:52:09	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:38.665    23:52:09	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:38.665    23:52:09	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:38.665   23:52:09	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:38.665    "name": "Existed_Raid",
00:18:38.665    "uuid": "2e6ae6a5-ec0d-43fe-8759-f7d56d79a0c5",
00:18:38.665    "strip_size_kb": 0,
00:18:38.665    "state": "configuring",
00:18:38.665    "raid_level": "raid1",
00:18:38.665    "superblock": true,
00:18:38.665    "num_base_bdevs": 4,
00:18:38.665    "num_base_bdevs_discovered": 1,
00:18:38.665    "num_base_bdevs_operational": 4,
00:18:38.665    "base_bdevs_list": [
00:18:38.665      {
00:18:38.665        "name": "BaseBdev1",
00:18:38.665        "uuid": "2ed538ca-d665-4daa-80fd-0030c11e2c12",
00:18:38.665        "is_configured": true,
00:18:38.665        "data_offset": 2048,
00:18:38.665        "data_size": 63488
00:18:38.665      },
00:18:38.665      {
00:18:38.665        "name": "BaseBdev2",
00:18:38.665        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:38.665        "is_configured": false,
00:18:38.665        "data_offset": 0,
00:18:38.665        "data_size": 0
00:18:38.665      },
00:18:38.665      {
00:18:38.665        "name": "BaseBdev3",
00:18:38.665        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:38.665        "is_configured": false,
00:18:38.665        "data_offset": 0,
00:18:38.665        "data_size": 0
00:18:38.665      },
00:18:38.665      {
00:18:38.665        "name": "BaseBdev4",
00:18:38.665        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:38.665        "is_configured": false,
00:18:38.665        "data_offset": 0,
00:18:38.665        "data_size": 0
00:18:38.665      }
00:18:38.665    ]
00:18:38.665  }'
00:18:38.665   23:52:09	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:38.665   23:52:09	-- common/autotest_common.sh@10 -- # set +x
00:18:39.232   23:52:09	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:18:39.491  [2024-12-13 23:52:10.208392] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:39.491  BaseBdev2
00:18:39.491   23:52:10	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:18:39.491   23:52:10	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:18:39.749   23:52:10	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:39.749   23:52:10	-- common/autotest_common.sh@899 -- # local i
00:18:39.749   23:52:10	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:39.749   23:52:10	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:39.749   23:52:10	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:39.749   23:52:10	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:18:40.007  [
00:18:40.007    {
00:18:40.007      "name": "BaseBdev2",
00:18:40.007      "aliases": [
00:18:40.007        "beeb1080-9c88-4f72-a480-4a781a95bbdf"
00:18:40.007      ],
00:18:40.007      "product_name": "Malloc disk",
00:18:40.007      "block_size": 512,
00:18:40.007      "num_blocks": 65536,
00:18:40.007      "uuid": "beeb1080-9c88-4f72-a480-4a781a95bbdf",
00:18:40.007      "assigned_rate_limits": {
00:18:40.007        "rw_ios_per_sec": 0,
00:18:40.007        "rw_mbytes_per_sec": 0,
00:18:40.007        "r_mbytes_per_sec": 0,
00:18:40.007        "w_mbytes_per_sec": 0
00:18:40.007      },
00:18:40.007      "claimed": true,
00:18:40.007      "claim_type": "exclusive_write",
00:18:40.007      "zoned": false,
00:18:40.007      "supported_io_types": {
00:18:40.007        "read": true,
00:18:40.007        "write": true,
00:18:40.007        "unmap": true,
00:18:40.007        "write_zeroes": true,
00:18:40.007        "flush": true,
00:18:40.007        "reset": true,
00:18:40.007        "compare": false,
00:18:40.007        "compare_and_write": false,
00:18:40.007        "abort": true,
00:18:40.007        "nvme_admin": false,
00:18:40.007        "nvme_io": false
00:18:40.007      },
00:18:40.007      "memory_domains": [
00:18:40.007        {
00:18:40.007          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:40.007          "dma_device_type": 2
00:18:40.007        }
00:18:40.007      ],
00:18:40.007      "driver_specific": {}
00:18:40.007    }
00:18:40.007  ]
00:18:40.007   23:52:10	-- common/autotest_common.sh@905 -- # return 0
00:18:40.007   23:52:10	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:18:40.007   23:52:10	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:40.007   23:52:10	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:18:40.007   23:52:10	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:40.007   23:52:10	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:40.007   23:52:10	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:40.007   23:52:10	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:40.007   23:52:10	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:40.007   23:52:10	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:40.007   23:52:10	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:40.007   23:52:10	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:40.007   23:52:10	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:40.007    23:52:10	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:40.007    23:52:10	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:40.265   23:52:10	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:40.265    "name": "Existed_Raid",
00:18:40.265    "uuid": "2e6ae6a5-ec0d-43fe-8759-f7d56d79a0c5",
00:18:40.265    "strip_size_kb": 0,
00:18:40.265    "state": "configuring",
00:18:40.265    "raid_level": "raid1",
00:18:40.265    "superblock": true,
00:18:40.265    "num_base_bdevs": 4,
00:18:40.265    "num_base_bdevs_discovered": 2,
00:18:40.265    "num_base_bdevs_operational": 4,
00:18:40.265    "base_bdevs_list": [
00:18:40.265      {
00:18:40.265        "name": "BaseBdev1",
00:18:40.265        "uuid": "2ed538ca-d665-4daa-80fd-0030c11e2c12",
00:18:40.265        "is_configured": true,
00:18:40.265        "data_offset": 2048,
00:18:40.265        "data_size": 63488
00:18:40.265      },
00:18:40.265      {
00:18:40.265        "name": "BaseBdev2",
00:18:40.265        "uuid": "beeb1080-9c88-4f72-a480-4a781a95bbdf",
00:18:40.265        "is_configured": true,
00:18:40.265        "data_offset": 2048,
00:18:40.265        "data_size": 63488
00:18:40.265      },
00:18:40.265      {
00:18:40.265        "name": "BaseBdev3",
00:18:40.265        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:40.265        "is_configured": false,
00:18:40.265        "data_offset": 0,
00:18:40.265        "data_size": 0
00:18:40.265      },
00:18:40.265      {
00:18:40.265        "name": "BaseBdev4",
00:18:40.265        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:40.265        "is_configured": false,
00:18:40.265        "data_offset": 0,
00:18:40.265        "data_size": 0
00:18:40.265      }
00:18:40.265    ]
00:18:40.265  }'
00:18:40.265   23:52:10	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:40.265   23:52:10	-- common/autotest_common.sh@10 -- # set +x
00:18:40.831   23:52:11	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:18:41.090  [2024-12-13 23:52:11.745291] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:18:41.090  BaseBdev3
00:18:41.090   23:52:11	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:18:41.090   23:52:11	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:18:41.090   23:52:11	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:41.090   23:52:11	-- common/autotest_common.sh@899 -- # local i
00:18:41.090   23:52:11	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:41.090   23:52:11	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:41.090   23:52:11	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:41.348   23:52:11	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:18:41.607  [
00:18:41.607    {
00:18:41.607      "name": "BaseBdev3",
00:18:41.607      "aliases": [
00:18:41.607        "cef8b107-40fd-4b8d-a867-79756c043bc2"
00:18:41.607      ],
00:18:41.607      "product_name": "Malloc disk",
00:18:41.607      "block_size": 512,
00:18:41.607      "num_blocks": 65536,
00:18:41.607      "uuid": "cef8b107-40fd-4b8d-a867-79756c043bc2",
00:18:41.607      "assigned_rate_limits": {
00:18:41.607        "rw_ios_per_sec": 0,
00:18:41.607        "rw_mbytes_per_sec": 0,
00:18:41.607        "r_mbytes_per_sec": 0,
00:18:41.607        "w_mbytes_per_sec": 0
00:18:41.607      },
00:18:41.607      "claimed": true,
00:18:41.607      "claim_type": "exclusive_write",
00:18:41.607      "zoned": false,
00:18:41.607      "supported_io_types": {
00:18:41.607        "read": true,
00:18:41.607        "write": true,
00:18:41.607        "unmap": true,
00:18:41.607        "write_zeroes": true,
00:18:41.607        "flush": true,
00:18:41.607        "reset": true,
00:18:41.607        "compare": false,
00:18:41.607        "compare_and_write": false,
00:18:41.607        "abort": true,
00:18:41.607        "nvme_admin": false,
00:18:41.607        "nvme_io": false
00:18:41.607      },
00:18:41.607      "memory_domains": [
00:18:41.607        {
00:18:41.607          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:41.607          "dma_device_type": 2
00:18:41.607        }
00:18:41.607      ],
00:18:41.607      "driver_specific": {}
00:18:41.607    }
00:18:41.607  ]
00:18:41.607   23:52:12	-- common/autotest_common.sh@905 -- # return 0
00:18:41.607   23:52:12	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:18:41.607   23:52:12	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:41.607   23:52:12	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:18:41.607   23:52:12	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:41.607   23:52:12	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:41.607   23:52:12	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:41.607   23:52:12	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:41.607   23:52:12	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:41.607   23:52:12	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:41.607   23:52:12	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:41.607   23:52:12	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:41.607   23:52:12	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:41.607    23:52:12	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:41.607    23:52:12	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:41.865   23:52:12	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:41.865    "name": "Existed_Raid",
00:18:41.865    "uuid": "2e6ae6a5-ec0d-43fe-8759-f7d56d79a0c5",
00:18:41.865    "strip_size_kb": 0,
00:18:41.865    "state": "configuring",
00:18:41.865    "raid_level": "raid1",
00:18:41.865    "superblock": true,
00:18:41.865    "num_base_bdevs": 4,
00:18:41.865    "num_base_bdevs_discovered": 3,
00:18:41.866    "num_base_bdevs_operational": 4,
00:18:41.866    "base_bdevs_list": [
00:18:41.866      {
00:18:41.866        "name": "BaseBdev1",
00:18:41.866        "uuid": "2ed538ca-d665-4daa-80fd-0030c11e2c12",
00:18:41.866        "is_configured": true,
00:18:41.866        "data_offset": 2048,
00:18:41.866        "data_size": 63488
00:18:41.866      },
00:18:41.866      {
00:18:41.866        "name": "BaseBdev2",
00:18:41.866        "uuid": "beeb1080-9c88-4f72-a480-4a781a95bbdf",
00:18:41.866        "is_configured": true,
00:18:41.866        "data_offset": 2048,
00:18:41.866        "data_size": 63488
00:18:41.866      },
00:18:41.866      {
00:18:41.866        "name": "BaseBdev3",
00:18:41.866        "uuid": "cef8b107-40fd-4b8d-a867-79756c043bc2",
00:18:41.866        "is_configured": true,
00:18:41.866        "data_offset": 2048,
00:18:41.866        "data_size": 63488
00:18:41.866      },
00:18:41.866      {
00:18:41.866        "name": "BaseBdev4",
00:18:41.866        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:41.866        "is_configured": false,
00:18:41.866        "data_offset": 0,
00:18:41.866        "data_size": 0
00:18:41.866      }
00:18:41.866    ]
00:18:41.866  }'
00:18:41.866   23:52:12	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:41.866   23:52:12	-- common/autotest_common.sh@10 -- # set +x
00:18:42.433   23:52:13	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:18:42.693  [2024-12-13 23:52:13.274762] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:18:42.693  [2024-12-13 23:52:13.275019] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580
00:18:42.693  [2024-12-13 23:52:13.275034] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:18:42.693  [2024-12-13 23:52:13.275153] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860
00:18:42.693  [2024-12-13 23:52:13.275505] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580
00:18:42.693  [2024-12-13 23:52:13.275527] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580
00:18:42.693  [2024-12-13 23:52:13.275712] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:42.693  BaseBdev4
00:18:42.693   23:52:13	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:18:42.693   23:52:13	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:18:42.693   23:52:13	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:42.693   23:52:13	-- common/autotest_common.sh@899 -- # local i
00:18:42.693   23:52:13	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:42.693   23:52:13	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:42.693   23:52:13	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:42.951   23:52:13	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:18:43.209  [
00:18:43.209    {
00:18:43.209      "name": "BaseBdev4",
00:18:43.209      "aliases": [
00:18:43.209        "853c48a7-4f7f-40d0-878b-76bd5e71681b"
00:18:43.209      ],
00:18:43.209      "product_name": "Malloc disk",
00:18:43.209      "block_size": 512,
00:18:43.209      "num_blocks": 65536,
00:18:43.209      "uuid": "853c48a7-4f7f-40d0-878b-76bd5e71681b",
00:18:43.209      "assigned_rate_limits": {
00:18:43.209        "rw_ios_per_sec": 0,
00:18:43.209        "rw_mbytes_per_sec": 0,
00:18:43.209        "r_mbytes_per_sec": 0,
00:18:43.209        "w_mbytes_per_sec": 0
00:18:43.209      },
00:18:43.209      "claimed": true,
00:18:43.209      "claim_type": "exclusive_write",
00:18:43.209      "zoned": false,
00:18:43.209      "supported_io_types": {
00:18:43.209        "read": true,
00:18:43.209        "write": true,
00:18:43.209        "unmap": true,
00:18:43.209        "write_zeroes": true,
00:18:43.209        "flush": true,
00:18:43.209        "reset": true,
00:18:43.209        "compare": false,
00:18:43.209        "compare_and_write": false,
00:18:43.209        "abort": true,
00:18:43.209        "nvme_admin": false,
00:18:43.209        "nvme_io": false
00:18:43.209      },
00:18:43.209      "memory_domains": [
00:18:43.209        {
00:18:43.209          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:43.209          "dma_device_type": 2
00:18:43.209        }
00:18:43.209      ],
00:18:43.209      "driver_specific": {}
00:18:43.209    }
00:18:43.209  ]
00:18:43.209   23:52:13	-- common/autotest_common.sh@905 -- # return 0
00:18:43.209   23:52:13	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:18:43.209   23:52:13	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:43.209   23:52:13	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4
00:18:43.209   23:52:13	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:43.209   23:52:13	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:43.209   23:52:13	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:43.209   23:52:13	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:43.209   23:52:13	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:43.209   23:52:13	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:43.209   23:52:13	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:43.209   23:52:13	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:43.209   23:52:13	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:43.209    23:52:13	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:43.209    23:52:13	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:43.468   23:52:13	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:43.468    "name": "Existed_Raid",
00:18:43.468    "uuid": "2e6ae6a5-ec0d-43fe-8759-f7d56d79a0c5",
00:18:43.468    "strip_size_kb": 0,
00:18:43.468    "state": "online",
00:18:43.468    "raid_level": "raid1",
00:18:43.468    "superblock": true,
00:18:43.468    "num_base_bdevs": 4,
00:18:43.468    "num_base_bdevs_discovered": 4,
00:18:43.468    "num_base_bdevs_operational": 4,
00:18:43.468    "base_bdevs_list": [
00:18:43.468      {
00:18:43.468        "name": "BaseBdev1",
00:18:43.468        "uuid": "2ed538ca-d665-4daa-80fd-0030c11e2c12",
00:18:43.468        "is_configured": true,
00:18:43.468        "data_offset": 2048,
00:18:43.468        "data_size": 63488
00:18:43.468      },
00:18:43.468      {
00:18:43.468        "name": "BaseBdev2",
00:18:43.468        "uuid": "beeb1080-9c88-4f72-a480-4a781a95bbdf",
00:18:43.468        "is_configured": true,
00:18:43.468        "data_offset": 2048,
00:18:43.468        "data_size": 63488
00:18:43.468      },
00:18:43.468      {
00:18:43.468        "name": "BaseBdev3",
00:18:43.468        "uuid": "cef8b107-40fd-4b8d-a867-79756c043bc2",
00:18:43.468        "is_configured": true,
00:18:43.468        "data_offset": 2048,
00:18:43.468        "data_size": 63488
00:18:43.468      },
00:18:43.468      {
00:18:43.468        "name": "BaseBdev4",
00:18:43.468        "uuid": "853c48a7-4f7f-40d0-878b-76bd5e71681b",
00:18:43.468        "is_configured": true,
00:18:43.468        "data_offset": 2048,
00:18:43.468        "data_size": 63488
00:18:43.468      }
00:18:43.468    ]
00:18:43.468  }'
00:18:43.468   23:52:13	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:43.468   23:52:13	-- common/autotest_common.sh@10 -- # set +x
00:18:44.035   23:52:14	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:18:44.294  [2024-12-13 23:52:14.814950] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:44.294   23:52:14	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:18:44.294   23:52:14	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid1
00:18:44.294   23:52:14	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:18:44.294   23:52:14	-- bdev/bdev_raid.sh@196 -- # return 0
00:18:44.294   23:52:14	-- bdev/bdev_raid.sh@267 -- # expected_state=online
00:18:44.294   23:52:14	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:18:44.294   23:52:14	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:44.294   23:52:14	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:44.294   23:52:14	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:44.294   23:52:14	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:44.294   23:52:14	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:18:44.294   23:52:14	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:44.294   23:52:14	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:44.294   23:52:14	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:44.294   23:52:14	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:44.294    23:52:14	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:44.294    23:52:14	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:44.551   23:52:15	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:44.551    "name": "Existed_Raid",
00:18:44.551    "uuid": "2e6ae6a5-ec0d-43fe-8759-f7d56d79a0c5",
00:18:44.551    "strip_size_kb": 0,
00:18:44.551    "state": "online",
00:18:44.551    "raid_level": "raid1",
00:18:44.551    "superblock": true,
00:18:44.551    "num_base_bdevs": 4,
00:18:44.551    "num_base_bdevs_discovered": 3,
00:18:44.551    "num_base_bdevs_operational": 3,
00:18:44.551    "base_bdevs_list": [
00:18:44.551      {
00:18:44.551        "name": null,
00:18:44.551        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:44.551        "is_configured": false,
00:18:44.551        "data_offset": 2048,
00:18:44.551        "data_size": 63488
00:18:44.551      },
00:18:44.551      {
00:18:44.551        "name": "BaseBdev2",
00:18:44.551        "uuid": "beeb1080-9c88-4f72-a480-4a781a95bbdf",
00:18:44.551        "is_configured": true,
00:18:44.551        "data_offset": 2048,
00:18:44.551        "data_size": 63488
00:18:44.551      },
00:18:44.551      {
00:18:44.551        "name": "BaseBdev3",
00:18:44.551        "uuid": "cef8b107-40fd-4b8d-a867-79756c043bc2",
00:18:44.551        "is_configured": true,
00:18:44.551        "data_offset": 2048,
00:18:44.551        "data_size": 63488
00:18:44.551      },
00:18:44.551      {
00:18:44.551        "name": "BaseBdev4",
00:18:44.551        "uuid": "853c48a7-4f7f-40d0-878b-76bd5e71681b",
00:18:44.551        "is_configured": true,
00:18:44.551        "data_offset": 2048,
00:18:44.551        "data_size": 63488
00:18:44.551      }
00:18:44.551    ]
00:18:44.551  }'
00:18:44.551   23:52:15	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:44.551   23:52:15	-- common/autotest_common.sh@10 -- # set +x
00:18:45.117   23:52:15	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:18:45.117   23:52:15	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:45.117    23:52:15	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:45.117    23:52:15	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:18:45.376   23:52:16	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:18:45.376   23:52:16	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:45.376   23:52:16	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:18:45.634  [2024-12-13 23:52:16.195476] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:18:45.634   23:52:16	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:18:45.634   23:52:16	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:45.634    23:52:16	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:45.634    23:52:16	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:18:45.893   23:52:16	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:18:45.893   23:52:16	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:45.893   23:52:16	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:18:46.151  [2024-12-13 23:52:16.678514] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:18:46.151   23:52:16	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:18:46.151   23:52:16	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:46.151    23:52:16	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:46.151    23:52:16	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:18:46.410   23:52:16	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:18:46.410   23:52:16	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:46.410   23:52:16	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:18:46.410  [2024-12-13 23:52:17.129985] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:18:46.410  [2024-12-13 23:52:17.130020] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:46.410  [2024-12-13 23:52:17.130085] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:46.668  [2024-12-13 23:52:17.196392] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:46.668  [2024-12-13 23:52:17.196423] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline
00:18:46.668   23:52:17	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:18:46.668   23:52:17	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:46.668    23:52:17	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:46.668    23:52:17	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:18:46.927   23:52:17	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:18:46.927   23:52:17	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:18:46.927   23:52:17	-- bdev/bdev_raid.sh@287 -- # killprocess 121177
00:18:46.927   23:52:17	-- common/autotest_common.sh@936 -- # '[' -z 121177 ']'
00:18:46.927   23:52:17	-- common/autotest_common.sh@940 -- # kill -0 121177
00:18:46.927    23:52:17	-- common/autotest_common.sh@941 -- # uname
00:18:46.927   23:52:17	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:46.927    23:52:17	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121177
00:18:46.927   23:52:17	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:46.927  killing process with pid 121177
00:18:46.927   23:52:17	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:46.927   23:52:17	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 121177'
00:18:46.927   23:52:17	-- common/autotest_common.sh@955 -- # kill 121177
00:18:46.927  [2024-12-13 23:52:17.466105] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:46.927   23:52:17	-- common/autotest_common.sh@960 -- # wait 121177
00:18:46.927  [2024-12-13 23:52:17.466207] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@289 -- # return 0
00:18:47.864  
00:18:47.864  real	0m14.829s
00:18:47.864  user	0m26.380s
00:18:47.864  sys	0m1.710s
00:18:47.864   23:52:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:18:47.864  ************************************
00:18:47.864  END TEST raid_state_function_test_sb
00:18:47.864  ************************************
00:18:47.864   23:52:18	-- common/autotest_common.sh@10 -- # set +x
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4
00:18:47.864   23:52:18	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:18:47.864   23:52:18	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:47.864   23:52:18	-- common/autotest_common.sh@10 -- # set +x
00:18:47.864  ************************************
00:18:47.864  START TEST raid_superblock_test
00:18:47.864  ************************************
00:18:47.864   23:52:18	-- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 4
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@338 -- # local raid_level=raid1
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']'
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@353 -- # strip_size=0
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@357 -- # raid_pid=121631
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@358 -- # waitforlisten 121631 /var/tmp/spdk-raid.sock
00:18:47.864   23:52:18	-- common/autotest_common.sh@829 -- # '[' -z 121631 ']'
00:18:47.864   23:52:18	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:18:47.864   23:52:18	-- common/autotest_common.sh@834 -- # local max_retries=100
00:18:47.864  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:18:47.864   23:52:18	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:18:47.864   23:52:18	-- common/autotest_common.sh@838 -- # xtrace_disable
00:18:47.864   23:52:18	-- common/autotest_common.sh@10 -- # set +x
00:18:47.864   23:52:18	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:18:48.123  [2024-12-13 23:52:18.615236] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:18:48.123  [2024-12-13 23:52:18.616116] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121631 ]
00:18:48.123  [2024-12-13 23:52:18.787334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:48.381  [2024-12-13 23:52:18.999643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:48.655  [2024-12-13 23:52:19.187700] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:48.928   23:52:19	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:48.928   23:52:19	-- common/autotest_common.sh@862 -- # return 0
00:18:48.928   23:52:19	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:18:48.928   23:52:19	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:18:48.928   23:52:19	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:18:48.928   23:52:19	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:18:48.928   23:52:19	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:18:48.928   23:52:19	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:48.928   23:52:19	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:18:48.928   23:52:19	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:48.928   23:52:19	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:18:49.186  malloc1
00:18:49.186   23:52:19	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:49.445  [2024-12-13 23:52:19.923599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:49.445  [2024-12-13 23:52:19.923691] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:49.445  [2024-12-13 23:52:19.923723] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:18:49.445  [2024-12-13 23:52:19.923770] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:49.445  [2024-12-13 23:52:19.925989] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:49.445  [2024-12-13 23:52:19.926030] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:49.445  pt1
00:18:49.445   23:52:19	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:18:49.445   23:52:19	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:18:49.445   23:52:19	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:18:49.445   23:52:19	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:18:49.445   23:52:19	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:18:49.445   23:52:19	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:49.445   23:52:19	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:18:49.445   23:52:19	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:49.445   23:52:19	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:18:49.704  malloc2
00:18:49.704   23:52:20	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:49.704  [2024-12-13 23:52:20.385573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:49.704  [2024-12-13 23:52:20.385650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:49.704  [2024-12-13 23:52:20.385709] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:18:49.704  [2024-12-13 23:52:20.385770] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:49.704  [2024-12-13 23:52:20.388007] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:49.704  [2024-12-13 23:52:20.388050] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:49.704  pt2
00:18:49.704   23:52:20	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:18:49.704   23:52:20	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:18:49.704   23:52:20	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:18:49.704   23:52:20	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:18:49.704   23:52:20	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:18:49.704   23:52:20	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:49.704   23:52:20	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:18:49.704   23:52:20	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:49.704   23:52:20	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:18:49.963  malloc3
00:18:49.963   23:52:20	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:18:50.222  [2024-12-13 23:52:20.858431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:18:50.222  [2024-12-13 23:52:20.858505] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:50.222  [2024-12-13 23:52:20.858550] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:18:50.222  [2024-12-13 23:52:20.858594] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:50.222  [2024-12-13 23:52:20.860655] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:50.222  [2024-12-13 23:52:20.860702] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:18:50.222  pt3
00:18:50.222   23:52:20	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:18:50.222   23:52:20	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:18:50.222   23:52:20	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4
00:18:50.222   23:52:20	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4
00:18:50.222   23:52:20	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:18:50.222   23:52:20	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:50.222   23:52:20	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:18:50.222   23:52:20	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:50.222   23:52:20	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4
00:18:50.484  malloc4
00:18:50.484   23:52:21	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:18:50.744  [2024-12-13 23:52:21.254806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:18:50.744  [2024-12-13 23:52:21.254867] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:50.744  [2024-12-13 23:52:21.254900] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80
00:18:50.744  [2024-12-13 23:52:21.254941] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:50.744  [2024-12-13 23:52:21.257132] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:50.744  [2024-12-13 23:52:21.257178] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:18:50.744  pt4
00:18:50.744   23:52:21	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:18:50.744   23:52:21	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:18:50.744   23:52:21	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
00:18:50.744  [2024-12-13 23:52:21.438893] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:50.744  [2024-12-13 23:52:21.440770] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:50.744  [2024-12-13 23:52:21.440849] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:18:50.744  [2024-12-13 23:52:21.440904] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:18:50.744  [2024-12-13 23:52:21.441107] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380
00:18:50.744  [2024-12-13 23:52:21.441135] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:18:50.744  [2024-12-13 23:52:21.441269] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0
00:18:50.744  [2024-12-13 23:52:21.441629] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380
00:18:50.744  [2024-12-13 23:52:21.441649] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380
00:18:50.744  [2024-12-13 23:52:21.441782] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:50.744   23:52:21	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:18:50.744   23:52:21	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:50.744   23:52:21	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:50.744   23:52:21	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:50.744   23:52:21	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:50.744   23:52:21	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:50.744   23:52:21	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:50.744   23:52:21	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:50.744   23:52:21	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:50.744   23:52:21	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:50.744    23:52:21	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:50.744    23:52:21	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:51.002   23:52:21	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:51.002    "name": "raid_bdev1",
00:18:51.002    "uuid": "0644cde5-6feb-470a-a3a0-db132ed4a99e",
00:18:51.002    "strip_size_kb": 0,
00:18:51.002    "state": "online",
00:18:51.002    "raid_level": "raid1",
00:18:51.002    "superblock": true,
00:18:51.002    "num_base_bdevs": 4,
00:18:51.002    "num_base_bdevs_discovered": 4,
00:18:51.002    "num_base_bdevs_operational": 4,
00:18:51.002    "base_bdevs_list": [
00:18:51.002      {
00:18:51.002        "name": "pt1",
00:18:51.002        "uuid": "dede03aa-cae5-5009-8a6d-2d8c57ccbc67",
00:18:51.002        "is_configured": true,
00:18:51.002        "data_offset": 2048,
00:18:51.002        "data_size": 63488
00:18:51.002      },
00:18:51.002      {
00:18:51.002        "name": "pt2",
00:18:51.002        "uuid": "8f437140-3635-5b54-a906-10b64eff549a",
00:18:51.002        "is_configured": true,
00:18:51.002        "data_offset": 2048,
00:18:51.002        "data_size": 63488
00:18:51.002      },
00:18:51.002      {
00:18:51.002        "name": "pt3",
00:18:51.002        "uuid": "56e18ec5-ce1a-585e-a254-41c9b8d434b4",
00:18:51.002        "is_configured": true,
00:18:51.002        "data_offset": 2048,
00:18:51.002        "data_size": 63488
00:18:51.002      },
00:18:51.002      {
00:18:51.002        "name": "pt4",
00:18:51.002        "uuid": "50cb564a-5e7c-5162-bf3d-148256649f10",
00:18:51.002        "is_configured": true,
00:18:51.002        "data_offset": 2048,
00:18:51.002        "data_size": 63488
00:18:51.002      }
00:18:51.002    ]
00:18:51.002  }'
00:18:51.002   23:52:21	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:51.002   23:52:21	-- common/autotest_common.sh@10 -- # set +x
00:18:51.569    23:52:22	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:18:51.569    23:52:22	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:18:51.828  [2024-12-13 23:52:22.499180] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:51.828   23:52:22	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=0644cde5-6feb-470a-a3a0-db132ed4a99e
00:18:51.828   23:52:22	-- bdev/bdev_raid.sh@380 -- # '[' -z 0644cde5-6feb-470a-a3a0-db132ed4a99e ']'
00:18:51.828   23:52:22	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:18:52.086  [2024-12-13 23:52:22.687036] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:52.086  [2024-12-13 23:52:22.687057] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:52.086  [2024-12-13 23:52:22.687111] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:52.086  [2024-12-13 23:52:22.687182] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:52.086  [2024-12-13 23:52:22.687193] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline
00:18:52.086    23:52:22	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:52.086    23:52:22	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:18:52.345   23:52:22	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:18:52.345   23:52:22	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:18:52.345   23:52:22	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:18:52.345   23:52:22	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:18:52.345   23:52:23	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:18:52.345   23:52:23	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:18:52.604   23:52:23	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:18:52.604   23:52:23	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:18:52.863   23:52:23	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:18:52.863   23:52:23	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:18:53.121    23:52:23	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:18:53.121    23:52:23	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:18:53.380   23:52:23	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:18:53.380   23:52:23	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:18:53.380   23:52:23	-- common/autotest_common.sh@650 -- # local es=0
00:18:53.380   23:52:23	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:18:53.380   23:52:23	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:53.380   23:52:23	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:53.380    23:52:23	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:53.380   23:52:23	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:53.380    23:52:23	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:53.380   23:52:23	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:53.380   23:52:23	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:53.380   23:52:23	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:18:53.380   23:52:23	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:18:53.639  [2024-12-13 23:52:24.139234] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:18:53.639  [2024-12-13 23:52:24.141134] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:18:53.639  [2024-12-13 23:52:24.141196] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:18:53.639  [2024-12-13 23:52:24.141231] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:18:53.639  [2024-12-13 23:52:24.141276] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:18:53.639  [2024-12-13 23:52:24.141349] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:18:53.639  [2024-12-13 23:52:24.141384] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:18:53.639  [2024-12-13 23:52:24.141475] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4
00:18:53.639  [2024-12-13 23:52:24.141503] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:53.639  [2024-12-13 23:52:24.141514] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring
00:18:53.639  request:
00:18:53.639  {
00:18:53.639    "name": "raid_bdev1",
00:18:53.639    "raid_level": "raid1",
00:18:53.639    "base_bdevs": [
00:18:53.639      "malloc1",
00:18:53.639      "malloc2",
00:18:53.639      "malloc3",
00:18:53.639      "malloc4"
00:18:53.639    ],
00:18:53.639    "superblock": false,
00:18:53.639    "method": "bdev_raid_create",
00:18:53.639    "req_id": 1
00:18:53.639  }
00:18:53.639  Got JSON-RPC error response
00:18:53.639  response:
00:18:53.639  {
00:18:53.639    "code": -17,
00:18:53.639    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:18:53.639  }
00:18:53.639   23:52:24	-- common/autotest_common.sh@653 -- # es=1
00:18:53.639   23:52:24	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:53.639   23:52:24	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:53.639   23:52:24	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:53.639    23:52:24	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:18:53.639    23:52:24	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:53.898   23:52:24	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:18:53.898   23:52:24	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:18:53.898   23:52:24	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:53.898  [2024-12-13 23:52:24.571262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:53.898  [2024-12-13 23:52:24.571325] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:53.898  [2024-12-13 23:52:24.571357] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:18:53.898  [2024-12-13 23:52:24.571383] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:53.898  [2024-12-13 23:52:24.573674] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:53.898  [2024-12-13 23:52:24.573757] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:53.898  [2024-12-13 23:52:24.573846] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:18:53.898  [2024-12-13 23:52:24.573901] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:53.898  pt1
00:18:53.898   23:52:24	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:18:53.898   23:52:24	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:53.898   23:52:24	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:53.898   23:52:24	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:53.898   23:52:24	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:53.898   23:52:24	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:53.898   23:52:24	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:53.898   23:52:24	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:53.898   23:52:24	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:53.898   23:52:24	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:53.898    23:52:24	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:53.898    23:52:24	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:54.156   23:52:24	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:54.156    "name": "raid_bdev1",
00:18:54.156    "uuid": "0644cde5-6feb-470a-a3a0-db132ed4a99e",
00:18:54.156    "strip_size_kb": 0,
00:18:54.156    "state": "configuring",
00:18:54.156    "raid_level": "raid1",
00:18:54.156    "superblock": true,
00:18:54.156    "num_base_bdevs": 4,
00:18:54.156    "num_base_bdevs_discovered": 1,
00:18:54.156    "num_base_bdevs_operational": 4,
00:18:54.156    "base_bdevs_list": [
00:18:54.156      {
00:18:54.156        "name": "pt1",
00:18:54.156        "uuid": "dede03aa-cae5-5009-8a6d-2d8c57ccbc67",
00:18:54.156        "is_configured": true,
00:18:54.156        "data_offset": 2048,
00:18:54.156        "data_size": 63488
00:18:54.156      },
00:18:54.156      {
00:18:54.156        "name": null,
00:18:54.156        "uuid": "8f437140-3635-5b54-a906-10b64eff549a",
00:18:54.156        "is_configured": false,
00:18:54.156        "data_offset": 2048,
00:18:54.156        "data_size": 63488
00:18:54.156      },
00:18:54.156      {
00:18:54.156        "name": null,
00:18:54.156        "uuid": "56e18ec5-ce1a-585e-a254-41c9b8d434b4",
00:18:54.156        "is_configured": false,
00:18:54.156        "data_offset": 2048,
00:18:54.156        "data_size": 63488
00:18:54.156      },
00:18:54.156      {
00:18:54.156        "name": null,
00:18:54.156        "uuid": "50cb564a-5e7c-5162-bf3d-148256649f10",
00:18:54.156        "is_configured": false,
00:18:54.156        "data_offset": 2048,
00:18:54.156        "data_size": 63488
00:18:54.156      }
00:18:54.156    ]
00:18:54.156  }'
00:18:54.156   23:52:24	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:54.156   23:52:24	-- common/autotest_common.sh@10 -- # set +x
00:18:54.724   23:52:25	-- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']'
00:18:54.724   23:52:25	-- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:54.982  [2024-12-13 23:52:25.579441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:54.982  [2024-12-13 23:52:25.579512] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:54.982  [2024-12-13 23:52:25.579562] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:18:54.982  [2024-12-13 23:52:25.579586] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:54.982  [2024-12-13 23:52:25.580050] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:54.982  [2024-12-13 23:52:25.580102] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:54.982  [2024-12-13 23:52:25.580208] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:18:54.982  [2024-12-13 23:52:25.580254] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:54.982  pt2
00:18:54.982   23:52:25	-- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:18:55.241  [2024-12-13 23:52:25.767476] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:18:55.241   23:52:25	-- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:18:55.241   23:52:25	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:55.241   23:52:25	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:55.241   23:52:25	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:55.241   23:52:25	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:55.241   23:52:25	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:55.241   23:52:25	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:55.241   23:52:25	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:55.241   23:52:25	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:55.241   23:52:25	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:55.241    23:52:25	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:55.241    23:52:25	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:55.500   23:52:26	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:55.500    "name": "raid_bdev1",
00:18:55.500    "uuid": "0644cde5-6feb-470a-a3a0-db132ed4a99e",
00:18:55.500    "strip_size_kb": 0,
00:18:55.500    "state": "configuring",
00:18:55.500    "raid_level": "raid1",
00:18:55.500    "superblock": true,
00:18:55.500    "num_base_bdevs": 4,
00:18:55.500    "num_base_bdevs_discovered": 1,
00:18:55.500    "num_base_bdevs_operational": 4,
00:18:55.500    "base_bdevs_list": [
00:18:55.500      {
00:18:55.500        "name": "pt1",
00:18:55.500        "uuid": "dede03aa-cae5-5009-8a6d-2d8c57ccbc67",
00:18:55.500        "is_configured": true,
00:18:55.500        "data_offset": 2048,
00:18:55.500        "data_size": 63488
00:18:55.500      },
00:18:55.500      {
00:18:55.500        "name": null,
00:18:55.500        "uuid": "8f437140-3635-5b54-a906-10b64eff549a",
00:18:55.500        "is_configured": false,
00:18:55.500        "data_offset": 2048,
00:18:55.500        "data_size": 63488
00:18:55.500      },
00:18:55.500      {
00:18:55.500        "name": null,
00:18:55.500        "uuid": "56e18ec5-ce1a-585e-a254-41c9b8d434b4",
00:18:55.500        "is_configured": false,
00:18:55.500        "data_offset": 2048,
00:18:55.500        "data_size": 63488
00:18:55.500      },
00:18:55.500      {
00:18:55.500        "name": null,
00:18:55.500        "uuid": "50cb564a-5e7c-5162-bf3d-148256649f10",
00:18:55.500        "is_configured": false,
00:18:55.500        "data_offset": 2048,
00:18:55.500        "data_size": 63488
00:18:55.500      }
00:18:55.500    ]
00:18:55.500  }'
00:18:55.500   23:52:26	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:55.500   23:52:26	-- common/autotest_common.sh@10 -- # set +x
00:18:56.067   23:52:26	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:18:56.067   23:52:26	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:18:56.067   23:52:26	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:56.326  [2024-12-13 23:52:26.895683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:56.326  [2024-12-13 23:52:26.895735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:56.326  [2024-12-13 23:52:26.895765] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:18:56.326  [2024-12-13 23:52:26.895784] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:56.326  [2024-12-13 23:52:26.896177] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:56.326  [2024-12-13 23:52:26.896237] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:56.326  [2024-12-13 23:52:26.896316] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:18:56.326  [2024-12-13 23:52:26.896336] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:56.326  pt2
00:18:56.326   23:52:26	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:18:56.326   23:52:26	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:18:56.326   23:52:26	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:18:56.584  [2024-12-13 23:52:27.131719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:18:56.584  [2024-12-13 23:52:27.131778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:56.584  [2024-12-13 23:52:27.131809] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:18:56.585  [2024-12-13 23:52:27.131836] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:56.585  [2024-12-13 23:52:27.132241] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:56.585  [2024-12-13 23:52:27.132302] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:18:56.585  [2024-12-13 23:52:27.132379] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:18:56.585  [2024-12-13 23:52:27.132415] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:18:56.585  pt3
00:18:56.585   23:52:27	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:18:56.585   23:52:27	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:18:56.585   23:52:27	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:18:56.843  [2024-12-13 23:52:27.323764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:18:56.843  [2024-12-13 23:52:27.323838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:56.843  [2024-12-13 23:52:27.323865] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:18:56.843  [2024-12-13 23:52:27.323891] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:56.843  [2024-12-13 23:52:27.324280] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:56.843  [2024-12-13 23:52:27.324339] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:18:56.843  [2024-12-13 23:52:27.324440] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:18:56.843  [2024-12-13 23:52:27.324476] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:18:56.843  [2024-12-13 23:52:27.324610] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580
00:18:56.843  [2024-12-13 23:52:27.324632] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:18:56.843  [2024-12-13 23:52:27.324733] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:18:56.843  [2024-12-13 23:52:27.325080] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580
00:18:56.843  [2024-12-13 23:52:27.325101] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580
00:18:56.843  [2024-12-13 23:52:27.325235] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:56.843  pt4
00:18:56.843   23:52:27	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:18:56.843   23:52:27	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:18:56.843   23:52:27	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:18:56.843   23:52:27	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:56.843   23:52:27	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:56.843   23:52:27	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:56.843   23:52:27	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:56.843   23:52:27	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:56.843   23:52:27	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:56.843   23:52:27	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:56.843   23:52:27	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:56.843   23:52:27	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:56.844    23:52:27	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:56.844    23:52:27	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:57.102   23:52:27	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:57.102    "name": "raid_bdev1",
00:18:57.102    "uuid": "0644cde5-6feb-470a-a3a0-db132ed4a99e",
00:18:57.102    "strip_size_kb": 0,
00:18:57.102    "state": "online",
00:18:57.102    "raid_level": "raid1",
00:18:57.102    "superblock": true,
00:18:57.102    "num_base_bdevs": 4,
00:18:57.102    "num_base_bdevs_discovered": 4,
00:18:57.102    "num_base_bdevs_operational": 4,
00:18:57.102    "base_bdevs_list": [
00:18:57.102      {
00:18:57.102        "name": "pt1",
00:18:57.102        "uuid": "dede03aa-cae5-5009-8a6d-2d8c57ccbc67",
00:18:57.102        "is_configured": true,
00:18:57.102        "data_offset": 2048,
00:18:57.102        "data_size": 63488
00:18:57.102      },
00:18:57.102      {
00:18:57.102        "name": "pt2",
00:18:57.102        "uuid": "8f437140-3635-5b54-a906-10b64eff549a",
00:18:57.102        "is_configured": true,
00:18:57.102        "data_offset": 2048,
00:18:57.102        "data_size": 63488
00:18:57.102      },
00:18:57.102      {
00:18:57.102        "name": "pt3",
00:18:57.102        "uuid": "56e18ec5-ce1a-585e-a254-41c9b8d434b4",
00:18:57.102        "is_configured": true,
00:18:57.102        "data_offset": 2048,
00:18:57.102        "data_size": 63488
00:18:57.102      },
00:18:57.102      {
00:18:57.102        "name": "pt4",
00:18:57.102        "uuid": "50cb564a-5e7c-5162-bf3d-148256649f10",
00:18:57.102        "is_configured": true,
00:18:57.102        "data_offset": 2048,
00:18:57.102        "data_size": 63488
00:18:57.102      }
00:18:57.102    ]
00:18:57.102  }'
00:18:57.102   23:52:27	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:57.102   23:52:27	-- common/autotest_common.sh@10 -- # set +x
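The verify_raid_bdev_state block above (repeated after every mutation in this test) boils down to one pattern: fetch all raid bdevs over the test's UNIX-domain RPC socket, isolate the one under test with jq, and compare the captured fields against the expected state, level, strip size, and member counts. A sketch of that pattern; the assertion lines are illustrative, not the helper's exact source:

  # Sketch: query the raid bdev over the test's RPC socket and check fields.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1")')
  [ "$(jq -r '.state' <<< "$info")" = online ] || exit 1
  [ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" -eq 4 ] || exit 1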
00:18:57.669    23:52:28	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:18:57.669    23:52:28	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:18:57.669  [2024-12-13 23:52:28.360117] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:57.669   23:52:28	-- bdev/bdev_raid.sh@430 -- # '[' 0644cde5-6feb-470a-a3a0-db132ed4a99e '!=' 0644cde5-6feb-470a-a3a0-db132ed4a99e ']'
00:18:57.669   23:52:28	-- bdev/bdev_raid.sh@434 -- # has_redundancy raid1
00:18:57.669   23:52:28	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:18:57.669   23:52:28	-- bdev/bdev_raid.sh@196 -- # return 0
00:18:57.669   23:52:28	-- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:18:57.928  [2024-12-13 23:52:28.620025] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
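Because raid1 reports redundancy (the has_redundancy check above returned 0), the test can delete a passthru base bdev out from under the live array; raid_bdev1 stays online, and the verify that follows expects 3 of 4 members discovered with the vacated slot dumped as a null entry. The same two RPCs, sketched:

  # Sketch: hot-remove one mirror leg; the array must stay online.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_passthru_delete pt1
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # prints "online"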
00:18:57.928   23:52:28	-- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:18:57.928   23:52:28	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:57.928   23:52:28	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:57.928   23:52:28	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:57.928   23:52:28	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:57.928   23:52:28	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:18:57.928   23:52:28	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:57.928   23:52:28	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:57.928   23:52:28	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:57.928   23:52:28	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:57.928    23:52:28	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:57.928    23:52:28	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:58.187   23:52:28	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:58.187    "name": "raid_bdev1",
00:18:58.187    "uuid": "0644cde5-6feb-470a-a3a0-db132ed4a99e",
00:18:58.187    "strip_size_kb": 0,
00:18:58.187    "state": "online",
00:18:58.187    "raid_level": "raid1",
00:18:58.187    "superblock": true,
00:18:58.187    "num_base_bdevs": 4,
00:18:58.187    "num_base_bdevs_discovered": 3,
00:18:58.187    "num_base_bdevs_operational": 3,
00:18:58.187    "base_bdevs_list": [
00:18:58.187      {
00:18:58.187        "name": null,
00:18:58.187        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:58.187        "is_configured": false,
00:18:58.187        "data_offset": 2048,
00:18:58.187        "data_size": 63488
00:18:58.187      },
00:18:58.187      {
00:18:58.187        "name": "pt2",
00:18:58.187        "uuid": "8f437140-3635-5b54-a906-10b64eff549a",
00:18:58.187        "is_configured": true,
00:18:58.187        "data_offset": 2048,
00:18:58.187        "data_size": 63488
00:18:58.187      },
00:18:58.187      {
00:18:58.187        "name": "pt3",
00:18:58.187        "uuid": "56e18ec5-ce1a-585e-a254-41c9b8d434b4",
00:18:58.187        "is_configured": true,
00:18:58.187        "data_offset": 2048,
00:18:58.187        "data_size": 63488
00:18:58.187      },
00:18:58.187      {
00:18:58.187        "name": "pt4",
00:18:58.187        "uuid": "50cb564a-5e7c-5162-bf3d-148256649f10",
00:18:58.187        "is_configured": true,
00:18:58.187        "data_offset": 2048,
00:18:58.187        "data_size": 63488
00:18:58.187      }
00:18:58.187    ]
00:18:58.187  }'
00:18:58.187   23:52:28	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:58.187   23:52:28	-- common/autotest_common.sh@10 -- # set +x
00:18:58.755   23:52:29	-- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:18:59.013  [2024-12-13 23:52:29.692185] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:59.013  [2024-12-13 23:52:29.692210] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:59.013  [2024-12-13 23:52:29.692258] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:59.013  [2024-12-13 23:52:29.692323] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:59.013  [2024-12-13 23:52:29.692334] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline
00:18:59.013    23:52:29	-- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:59.013    23:52:29	-- bdev/bdev_raid.sh@443 -- # jq -r '.[]'
00:18:59.272   23:52:29	-- bdev/bdev_raid.sh@443 -- # raid_bdev=
00:18:59.272   23:52:29	-- bdev/bdev_raid.sh@444 -- # '[' -n '' ']'
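Deleting the raid bdev walks it from online through offline to destruct (the five debug lines above), after which bdev_raid_get_bdevs returns an empty array; jq -r '.[]' therefore prints nothing, raid_bdev= stays empty, and the '[ -n '' ]' guard above falls through. Sketched as a delete-and-assert pair:

  # Sketch: tear the array down, then assert no raid bdevs remain.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_delete raid_bdev1
  [ -z "$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[]')" ] \
    || { echo 'raid bdev still present' >&2; exit 1; }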
00:18:59.272   23:52:29	-- bdev/bdev_raid.sh@449 -- # (( i = 1 ))
00:18:59.272   23:52:29	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:18:59.272   23:52:29	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:18:59.531   23:52:30	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:18:59.531   23:52:30	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:18:59.531   23:52:30	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:18:59.789   23:52:30	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:18:59.789   23:52:30	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:18:59.790   23:52:30	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:19:00.074   23:52:30	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:19:00.074   23:52:30	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:19:00.074   23:52:30	-- bdev/bdev_raid.sh@454 -- # (( i = 1 ))
00:19:00.074   23:52:30	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:19:00.074   23:52:30	-- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:00.333  [2024-12-13 23:52:30.784324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:00.333  [2024-12-13 23:52:30.784390] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:00.333  [2024-12-13 23:52:30.784423] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:19:00.333  [2024-12-13 23:52:30.784457] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:00.333  [2024-12-13 23:52:30.786759] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:00.333  [2024-12-13 23:52:30.786823] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:00.333  [2024-12-13 23:52:30.786917] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:19:00.333  [2024-12-13 23:52:30.786963] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:00.333  pt2
00:19:00.333   23:52:30	-- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:19:00.333   23:52:30	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:00.333   23:52:30	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:00.333   23:52:30	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:00.333   23:52:30	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:00.333   23:52:30	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:00.333   23:52:30	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:00.333   23:52:30	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:00.333   23:52:30	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:00.333   23:52:30	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:00.333    23:52:30	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:00.333    23:52:30	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:00.333   23:52:31	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:00.333    "name": "raid_bdev1",
00:19:00.333    "uuid": "0644cde5-6feb-470a-a3a0-db132ed4a99e",
00:19:00.333    "strip_size_kb": 0,
00:19:00.333    "state": "configuring",
00:19:00.333    "raid_level": "raid1",
00:19:00.333    "superblock": true,
00:19:00.333    "num_base_bdevs": 4,
00:19:00.333    "num_base_bdevs_discovered": 1,
00:19:00.333    "num_base_bdevs_operational": 3,
00:19:00.333    "base_bdevs_list": [
00:19:00.333      {
00:19:00.333        "name": null,
00:19:00.333        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:00.333        "is_configured": false,
00:19:00.333        "data_offset": 2048,
00:19:00.333        "data_size": 63488
00:19:00.333      },
00:19:00.333      {
00:19:00.333        "name": "pt2",
00:19:00.333        "uuid": "8f437140-3635-5b54-a906-10b64eff549a",
00:19:00.333        "is_configured": true,
00:19:00.333        "data_offset": 2048,
00:19:00.333        "data_size": 63488
00:19:00.333      },
00:19:00.333      {
00:19:00.333        "name": null,
00:19:00.333        "uuid": "56e18ec5-ce1a-585e-a254-41c9b8d434b4",
00:19:00.334        "is_configured": false,
00:19:00.334        "data_offset": 2048,
00:19:00.334        "data_size": 63488
00:19:00.334      },
00:19:00.334      {
00:19:00.334        "name": null,
00:19:00.334        "uuid": "50cb564a-5e7c-5162-bf3d-148256649f10",
00:19:00.334        "is_configured": false,
00:19:00.334        "data_offset": 2048,
00:19:00.334        "data_size": 63488
00:19:00.334      }
00:19:00.334    ]
00:19:00.334  }'
00:19:00.334   23:52:31	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:00.334   23:52:31	-- common/autotest_common.sh@10 -- # set +x
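Recreating pt2 with its original name and UUID is enough to start reassembly: the new passthru goes through examine, the superblock written earlier is found on malloc2, and raid_bdev1 reappears in "configuring" with one member discovered out of the three the current superblock generation expects (pt1 was removed while the array was online, so its slot stays null). A sketch of the recreate-and-check step, mirroring the commands in this run:

  # Sketch: re-register one base bdev and watch the array reassemble.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_passthru_create -b malloc2 -p pt2 \
      -u 00000000-0000-0000-0000-000000000002
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1")
             | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
  # expected here: "configuring 1/3"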
00:19:01.270   23:52:31	-- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:19:01.270   23:52:31	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:19:01.270   23:52:31	-- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:19:01.270  [2024-12-13 23:52:31.840496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:19:01.270  [2024-12-13 23:52:31.840544] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:01.270  [2024-12-13 23:52:31.840578] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:19:01.270  [2024-12-13 23:52:31.840597] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:01.270  [2024-12-13 23:52:31.840956] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:01.270  [2024-12-13 23:52:31.840999] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:19:01.270  [2024-12-13 23:52:31.841076] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:19:01.270  [2024-12-13 23:52:31.841095] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:19:01.270  pt3
00:19:01.270   23:52:31	-- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:19:01.270   23:52:31	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:01.270   23:52:31	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:01.270   23:52:31	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:01.270   23:52:31	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:01.270   23:52:31	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:01.270   23:52:31	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:01.270   23:52:31	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:01.270   23:52:31	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:01.270   23:52:31	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:01.270    23:52:31	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:01.270    23:52:31	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:01.529   23:52:32	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:01.529    "name": "raid_bdev1",
00:19:01.529    "uuid": "0644cde5-6feb-470a-a3a0-db132ed4a99e",
00:19:01.529    "strip_size_kb": 0,
00:19:01.529    "state": "configuring",
00:19:01.529    "raid_level": "raid1",
00:19:01.529    "superblock": true,
00:19:01.529    "num_base_bdevs": 4,
00:19:01.529    "num_base_bdevs_discovered": 2,
00:19:01.529    "num_base_bdevs_operational": 3,
00:19:01.529    "base_bdevs_list": [
00:19:01.529      {
00:19:01.529        "name": null,
00:19:01.529        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:01.529        "is_configured": false,
00:19:01.529        "data_offset": 2048,
00:19:01.529        "data_size": 63488
00:19:01.529      },
00:19:01.529      {
00:19:01.529        "name": "pt2",
00:19:01.529        "uuid": "8f437140-3635-5b54-a906-10b64eff549a",
00:19:01.529        "is_configured": true,
00:19:01.529        "data_offset": 2048,
00:19:01.529        "data_size": 63488
00:19:01.529      },
00:19:01.529      {
00:19:01.529        "name": "pt3",
00:19:01.529        "uuid": "56e18ec5-ce1a-585e-a254-41c9b8d434b4",
00:19:01.529        "is_configured": true,
00:19:01.529        "data_offset": 2048,
00:19:01.529        "data_size": 63488
00:19:01.529      },
00:19:01.529      {
00:19:01.529        "name": null,
00:19:01.529        "uuid": "50cb564a-5e7c-5162-bf3d-148256649f10",
00:19:01.529        "is_configured": false,
00:19:01.529        "data_offset": 2048,
00:19:01.529        "data_size": 63488
00:19:01.529      }
00:19:01.529    ]
00:19:01.529  }'
00:19:01.529   23:52:32	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:01.529   23:52:32	-- common/autotest_common.sh@10 -- # set +x
00:19:02.096   23:52:32	-- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:19:02.096   23:52:32	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:19:02.096   23:52:32	-- bdev/bdev_raid.sh@462 -- # i=3
00:19:02.096   23:52:32	-- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:19:02.355  [2024-12-13 23:52:32.980689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:19:02.355  [2024-12-13 23:52:32.980751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:02.355  [2024-12-13 23:52:32.980786] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:19:02.355  [2024-12-13 23:52:32.980807] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:02.355  [2024-12-13 23:52:32.981193] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:02.355  [2024-12-13 23:52:32.981231] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:19:02.355  [2024-12-13 23:52:32.981312] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:19:02.355  [2024-12-13 23:52:32.981333] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:19:02.355  [2024-12-13 23:52:32.981440] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80
00:19:02.355  [2024-12-13 23:52:32.981452] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:19:02.355  [2024-12-13 23:52:32.981570] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:19:02.355  [2024-12-13 23:52:32.981923] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80
00:19:02.355  [2024-12-13 23:52:32.981943] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80
00:19:02.355  [2024-12-13 23:52:32.982060] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:02.355  pt4
00:19:02.355   23:52:32	-- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:19:02.355   23:52:32	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:02.355   23:52:32	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:02.355   23:52:32	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:02.355   23:52:32	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:02.355   23:52:32	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:02.355   23:52:32	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:02.355   23:52:32	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:02.355   23:52:32	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:02.355   23:52:32	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:02.355    23:52:32	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:02.355    23:52:32	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:02.613   23:52:33	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:02.613    "name": "raid_bdev1",
00:19:02.613    "uuid": "0644cde5-6feb-470a-a3a0-db132ed4a99e",
00:19:02.613    "strip_size_kb": 0,
00:19:02.613    "state": "online",
00:19:02.613    "raid_level": "raid1",
00:19:02.613    "superblock": true,
00:19:02.613    "num_base_bdevs": 4,
00:19:02.613    "num_base_bdevs_discovered": 3,
00:19:02.613    "num_base_bdevs_operational": 3,
00:19:02.613    "base_bdevs_list": [
00:19:02.613      {
00:19:02.613        "name": null,
00:19:02.613        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:02.613        "is_configured": false,
00:19:02.613        "data_offset": 2048,
00:19:02.613        "data_size": 63488
00:19:02.613      },
00:19:02.613      {
00:19:02.613        "name": "pt2",
00:19:02.614        "uuid": "8f437140-3635-5b54-a906-10b64eff549a",
00:19:02.614        "is_configured": true,
00:19:02.614        "data_offset": 2048,
00:19:02.614        "data_size": 63488
00:19:02.614      },
00:19:02.614      {
00:19:02.614        "name": "pt3",
00:19:02.614        "uuid": "56e18ec5-ce1a-585e-a254-41c9b8d434b4",
00:19:02.614        "is_configured": true,
00:19:02.614        "data_offset": 2048,
00:19:02.614        "data_size": 63488
00:19:02.614      },
00:19:02.614      {
00:19:02.614        "name": "pt4",
00:19:02.614        "uuid": "50cb564a-5e7c-5162-bf3d-148256649f10",
00:19:02.614        "is_configured": true,
00:19:02.614        "data_offset": 2048,
00:19:02.614        "data_size": 63488
00:19:02.614      }
00:19:02.614    ]
00:19:02.614  }'
00:19:02.614   23:52:33	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:02.614   23:52:33	-- common/autotest_common.sh@10 -- # set +x
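With pt3 and then pt4 re-registered, the third discovered member meets the superblock's operational count of three and the array flips from configuring to online in degraded form, exactly what the dump above shows (discovered 3, operational 3, pt1's slot null). Sketched as a loop over the remaining members:

  # Sketch: re-register the remaining members until the degraded array
  # comes online; expected states are configuring after pt3, online after pt4.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  for n in 3 4; do
    "$rpc" -s "$sock" bdev_passthru_create -b "malloc$n" -p "pt$n" \
        -u "00000000-0000-0000-0000-00000000000$n"
    echo "after pt$n: $("$rpc" -s "$sock" bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .state')"
  done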
00:19:03.180   23:52:33	-- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']'
00:19:03.180   23:52:33	-- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:19:03.439  [2024-12-13 23:52:34.052845] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:03.439  [2024-12-13 23:52:34.052868] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:03.439  [2024-12-13 23:52:34.052914] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:03.439  [2024-12-13 23:52:34.052969] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:03.439  [2024-12-13 23:52:34.052979] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline
00:19:03.439    23:52:34	-- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:03.439    23:52:34	-- bdev/bdev_raid.sh@471 -- # jq -r '.[]'
00:19:03.697   23:52:34	-- bdev/bdev_raid.sh@471 -- # raid_bdev=
00:19:03.697   23:52:34	-- bdev/bdev_raid.sh@472 -- # '[' -n '' ']'
00:19:03.697   23:52:34	-- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:19:03.956  [2024-12-13 23:52:34.560932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:19:03.956  [2024-12-13 23:52:34.560986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:03.956  [2024-12-13 23:52:34.561018] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:19:03.956  [2024-12-13 23:52:34.561040] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:03.956  [2024-12-13 23:52:34.563167] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:03.956  [2024-12-13 23:52:34.563229] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:19:03.956  [2024-12-13 23:52:34.563311] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:19:03.956  [2024-12-13 23:52:34.563352] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:19:03.956  pt1
00:19:03.956   23:52:34	-- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:19:03.956   23:52:34	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:03.956   23:52:34	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:03.956   23:52:34	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:03.956   23:52:34	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:03.956   23:52:34	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:19:03.956   23:52:34	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:03.956   23:52:34	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:03.956   23:52:34	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:03.956   23:52:34	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:03.956    23:52:34	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:03.956    23:52:34	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:04.215   23:52:34	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:04.215    "name": "raid_bdev1",
00:19:04.215    "uuid": "0644cde5-6feb-470a-a3a0-db132ed4a99e",
00:19:04.215    "strip_size_kb": 0,
00:19:04.215    "state": "configuring",
00:19:04.215    "raid_level": "raid1",
00:19:04.215    "superblock": true,
00:19:04.215    "num_base_bdevs": 4,
00:19:04.215    "num_base_bdevs_discovered": 1,
00:19:04.215    "num_base_bdevs_operational": 4,
00:19:04.215    "base_bdevs_list": [
00:19:04.215      {
00:19:04.215        "name": "pt1",
00:19:04.215        "uuid": "dede03aa-cae5-5009-8a6d-2d8c57ccbc67",
00:19:04.215        "is_configured": true,
00:19:04.215        "data_offset": 2048,
00:19:04.215        "data_size": 63488
00:19:04.215      },
00:19:04.215      {
00:19:04.215        "name": null,
00:19:04.215        "uuid": "8f437140-3635-5b54-a906-10b64eff549a",
00:19:04.215        "is_configured": false,
00:19:04.215        "data_offset": 2048,
00:19:04.215        "data_size": 63488
00:19:04.215      },
00:19:04.215      {
00:19:04.215        "name": null,
00:19:04.215        "uuid": "56e18ec5-ce1a-585e-a254-41c9b8d434b4",
00:19:04.215        "is_configured": false,
00:19:04.215        "data_offset": 2048,
00:19:04.215        "data_size": 63488
00:19:04.215      },
00:19:04.215      {
00:19:04.215        "name": null,
00:19:04.215        "uuid": "50cb564a-5e7c-5162-bf3d-148256649f10",
00:19:04.215        "is_configured": false,
00:19:04.215        "data_offset": 2048,
00:19:04.215        "data_size": 63488
00:19:04.215      }
00:19:04.215    ]
00:19:04.215  }'
00:19:04.215   23:52:34	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:04.215   23:52:34	-- common/autotest_common.sh@10 -- # set +x
00:19:04.782   23:52:35	-- bdev/bdev_raid.sh@484 -- # (( i = 1 ))
00:19:04.782   23:52:35	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:19:04.782   23:52:35	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:19:04.782   23:52:35	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:19:04.782   23:52:35	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:19:04.782   23:52:35	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:19:05.040   23:52:35	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:19:05.040   23:52:35	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:19:05.040   23:52:35	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:19:05.299   23:52:35	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:19:05.299   23:52:35	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:19:05.299   23:52:35	-- bdev/bdev_raid.sh@489 -- # i=3
00:19:05.299   23:52:35	-- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:19:05.558  [2024-12-13 23:52:36.081215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:19:05.558  [2024-12-13 23:52:36.081273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:05.558  [2024-12-13 23:52:36.081298] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80
00:19:05.558  [2024-12-13 23:52:36.081320] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:05.558  [2024-12-13 23:52:36.081666] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:05.558  [2024-12-13 23:52:36.081712] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:19:05.558  [2024-12-13 23:52:36.081792] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:19:05.558  [2024-12-13 23:52:36.081805] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2)
00:19:05.558  [2024-12-13 23:52:36.081811] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:05.558  [2024-12-13 23:52:36.081832] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c980 name raid_bdev1, state configuring
00:19:05.558  [2024-12-13 23:52:36.081891] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:19:05.558  pt4
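This pass sets up a stale-superblock conflict on purpose: pt1 came back first, so examine built a configuring raid_bdev1 from pt1's older superblock (operational 4), then pt2..pt4 were deleted and pt4 re-created. When pt4 reappears its superblock carries sequence number 4 against the existing raid's 2, so raid_bdev_examine_sb prefers the newer generation: the half-configured array is deleted and rebuilt from pt4's view, in which pt1 is no longer a member. The verify below accordingly expects configuring with 1 of 3. A sketch of inspecting which slots survived:

  # Sketch: after the sequence-number conflict resolves, only pt4 is
  # configured and the other slots wait on the newer superblock generation.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1")
             | .base_bdevs_list[] | "\(.name // "null"): \(.is_configured)"'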
00:19:05.558   23:52:36	-- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:19:05.558   23:52:36	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:05.558   23:52:36	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:05.558   23:52:36	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:05.558   23:52:36	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:05.558   23:52:36	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:05.558   23:52:36	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:05.558   23:52:36	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:05.558   23:52:36	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:05.558   23:52:36	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:05.558    23:52:36	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:05.558    23:52:36	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:05.817   23:52:36	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:05.817    "name": "raid_bdev1",
00:19:05.817    "uuid": "0644cde5-6feb-470a-a3a0-db132ed4a99e",
00:19:05.817    "strip_size_kb": 0,
00:19:05.817    "state": "configuring",
00:19:05.817    "raid_level": "raid1",
00:19:05.817    "superblock": true,
00:19:05.817    "num_base_bdevs": 4,
00:19:05.817    "num_base_bdevs_discovered": 1,
00:19:05.817    "num_base_bdevs_operational": 3,
00:19:05.817    "base_bdevs_list": [
00:19:05.817      {
00:19:05.817        "name": null,
00:19:05.817        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:05.817        "is_configured": false,
00:19:05.817        "data_offset": 2048,
00:19:05.817        "data_size": 63488
00:19:05.817      },
00:19:05.817      {
00:19:05.817        "name": null,
00:19:05.817        "uuid": "8f437140-3635-5b54-a906-10b64eff549a",
00:19:05.817        "is_configured": false,
00:19:05.817        "data_offset": 2048,
00:19:05.817        "data_size": 63488
00:19:05.817      },
00:19:05.817      {
00:19:05.817        "name": null,
00:19:05.817        "uuid": "56e18ec5-ce1a-585e-a254-41c9b8d434b4",
00:19:05.817        "is_configured": false,
00:19:05.817        "data_offset": 2048,
00:19:05.817        "data_size": 63488
00:19:05.817      },
00:19:05.817      {
00:19:05.817        "name": "pt4",
00:19:05.817        "uuid": "50cb564a-5e7c-5162-bf3d-148256649f10",
00:19:05.817        "is_configured": true,
00:19:05.817        "data_offset": 2048,
00:19:05.817        "data_size": 63488
00:19:05.817      }
00:19:05.817    ]
00:19:05.817  }'
00:19:05.817   23:52:36	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:05.817   23:52:36	-- common/autotest_common.sh@10 -- # set +x
00:19:06.384   23:52:36	-- bdev/bdev_raid.sh@497 -- # (( i = 1 ))
00:19:06.384   23:52:36	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:19:06.384   23:52:36	-- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:06.642  [2024-12-13 23:52:37.177395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:06.642  [2024-12-13 23:52:37.177476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:06.642  [2024-12-13 23:52:37.177520] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d280
00:19:06.642  [2024-12-13 23:52:37.177544] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:06.642  [2024-12-13 23:52:37.177984] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:06.642  [2024-12-13 23:52:37.178051] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:06.642  [2024-12-13 23:52:37.178147] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:19:06.642  [2024-12-13 23:52:37.178184] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:06.642  pt2
00:19:06.642   23:52:37	-- bdev/bdev_raid.sh@497 -- # (( i++ ))
00:19:06.642   23:52:37	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:19:06.642   23:52:37	-- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:19:06.901  [2024-12-13 23:52:37.421452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:19:06.901  [2024-12-13 23:52:37.421512] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:06.901  [2024-12-13 23:52:37.421539] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580
00:19:06.901  [2024-12-13 23:52:37.421564] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:06.901  [2024-12-13 23:52:37.421994] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:06.901  [2024-12-13 23:52:37.422056] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:19:06.901  [2024-12-13 23:52:37.422139] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:19:06.901  [2024-12-13 23:52:37.422160] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:19:06.901  [2024-12-13 23:52:37.422276] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80
00:19:06.901  [2024-12-13 23:52:37.422287] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:19:06.901  [2024-12-13 23:52:37.422387] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:19:06.901  [2024-12-13 23:52:37.422722] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80
00:19:06.901  [2024-12-13 23:52:37.422743] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80
00:19:06.901  [2024-12-13 23:52:37.422859] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:06.901  pt3
00:19:06.901   23:52:37	-- bdev/bdev_raid.sh@497 -- # (( i++ ))
00:19:06.901   23:52:37	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:19:06.901   23:52:37	-- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:19:06.901   23:52:37	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:06.901   23:52:37	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:06.901   23:52:37	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:06.901   23:52:37	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:06.901   23:52:37	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:06.901   23:52:37	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:06.901   23:52:37	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:06.901   23:52:37	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:06.901   23:52:37	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:06.901    23:52:37	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:06.901    23:52:37	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:06.901   23:52:37	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:06.901    "name": "raid_bdev1",
00:19:06.901    "uuid": "0644cde5-6feb-470a-a3a0-db132ed4a99e",
00:19:06.901    "strip_size_kb": 0,
00:19:06.901    "state": "online",
00:19:06.901    "raid_level": "raid1",
00:19:06.901    "superblock": true,
00:19:06.901    "num_base_bdevs": 4,
00:19:06.901    "num_base_bdevs_discovered": 3,
00:19:06.901    "num_base_bdevs_operational": 3,
00:19:06.901    "base_bdevs_list": [
00:19:06.901      {
00:19:06.901        "name": null,
00:19:06.901        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:06.901        "is_configured": false,
00:19:06.901        "data_offset": 2048,
00:19:06.901        "data_size": 63488
00:19:06.901      },
00:19:06.901      {
00:19:06.901        "name": "pt2",
00:19:06.901        "uuid": "8f437140-3635-5b54-a906-10b64eff549a",
00:19:06.901        "is_configured": true,
00:19:06.901        "data_offset": 2048,
00:19:06.901        "data_size": 63488
00:19:06.901      },
00:19:06.901      {
00:19:06.901        "name": "pt3",
00:19:06.901        "uuid": "56e18ec5-ce1a-585e-a254-41c9b8d434b4",
00:19:06.901        "is_configured": true,
00:19:06.901        "data_offset": 2048,
00:19:06.901        "data_size": 63488
00:19:06.901      },
00:19:06.901      {
00:19:06.901        "name": "pt4",
00:19:06.901        "uuid": "50cb564a-5e7c-5162-bf3d-148256649f10",
00:19:06.901        "is_configured": true,
00:19:06.901        "data_offset": 2048,
00:19:06.901        "data_size": 63488
00:19:06.901      }
00:19:06.901    ]
00:19:06.901  }'
00:19:06.901   23:52:37	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:06.901   23:52:37	-- common/autotest_common.sh@10 -- # set +x
00:19:07.468    23:52:38	-- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:19:07.468    23:52:38	-- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid'
00:19:07.727  [2024-12-13 23:52:38.414266] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:07.727   23:52:38	-- bdev/bdev_raid.sh@506 -- # '[' 0644cde5-6feb-470a-a3a0-db132ed4a99e '!=' 0644cde5-6feb-470a-a3a0-db132ed4a99e ']'
00:19:07.727   23:52:38	-- bdev/bdev_raid.sh@511 -- # killprocess 121631
00:19:07.727   23:52:38	-- common/autotest_common.sh@936 -- # '[' -z 121631 ']'
00:19:07.727   23:52:38	-- common/autotest_common.sh@940 -- # kill -0 121631
00:19:07.727    23:52:38	-- common/autotest_common.sh@941 -- # uname
00:19:07.727   23:52:38	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:07.727    23:52:38	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121631
00:19:07.727  killing process with pid 121631
00:19:07.727   23:52:38	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:07.727   23:52:38	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:07.727   23:52:38	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 121631'
00:19:07.727   23:52:38	-- common/autotest_common.sh@955 -- # kill 121631
00:19:07.727  [2024-12-13 23:52:38.454367] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:19:07.727   23:52:38	-- common/autotest_common.sh@960 -- # wait 121631
00:19:07.727  [2024-12-13 23:52:38.454437] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:07.727  [2024-12-13 23:52:38.454558] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:07.727  [2024-12-13 23:52:38.454575] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state offline
00:19:07.986  [2024-12-13 23:52:38.709662] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:19:08.923  ************************************
00:19:08.923  END TEST raid_superblock_test
00:19:08.923  ************************************
00:19:08.923   23:52:39	-- bdev/bdev_raid.sh@513 -- # return 0
00:19:08.923  
00:19:08.923  real	0m21.088s
00:19:08.923  user	0m38.802s
00:19:08.923  sys	0m2.451s
00:19:08.923   23:52:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:08.923   23:52:39	-- common/autotest_common.sh@10 -- # set +x
00:19:09.181   23:52:39	-- bdev/bdev_raid.sh@733 -- # '[' true = true ']'
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@734 -- # for n in 2 4
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false
00:19:09.182   23:52:39	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:19:09.182   23:52:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:09.182   23:52:39	-- common/autotest_common.sh@10 -- # set +x
00:19:09.182  ************************************
00:19:09.182  START TEST raid_rebuild_test
00:19:09.182  ************************************
00:19:09.182   23:52:39	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false false
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@519 -- # local superblock=false
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@520 -- # local background_io=false
00:19:09.182    23:52:39	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:19:09.182    23:52:39	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:09.182    23:52:39	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:19:09.182    23:52:39	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:19:09.182    23:52:39	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:09.182    23:52:39	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:19:09.182    23:52:39	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:19:09.182    23:52:39	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@536 -- # strip_size=0
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@539 -- # '[' false = true ']'
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@544 -- # raid_pid=122299
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@545 -- # waitforlisten 122299 /var/tmp/spdk-raid.sock
00:19:09.182   23:52:39	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:19:09.182   23:52:39	-- common/autotest_common.sh@829 -- # '[' -z 122299 ']'
00:19:09.182   23:52:39	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:19:09.182   23:52:39	-- common/autotest_common.sh@834 -- # local max_retries=100
00:19:09.182  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:19:09.182   23:52:39	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:19:09.182   23:52:39	-- common/autotest_common.sh@838 -- # xtrace_disable
00:19:09.182   23:52:39	-- common/autotest_common.sh@10 -- # set +x
00:19:09.182  [2024-12-13 23:52:39.775061] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:19:09.182  [2024-12-13 23:52:39.775258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122299 ]
00:19:09.182  I/O size of 3145728 is greater than zero copy threshold (65536).
00:19:09.182  Zero copy mechanism will not be used.
00:19:09.440  [2024-12-13 23:52:39.950425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:09.699  [2024-12-13 23:52:40.197927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:09.699  [2024-12-13 23:52:40.382858] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
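The rebuild test runs inside bdevperf rather than a bare target: the invocation above asks for 60 s of random read/write at a 50/50 mix (-w randrw -M 50), 3 MiB I/Os at queue depth 2 (-o 3M -q 2), restricted to raid_bdev1 (-T), with bdev_raid debug logging (-L); -z and -U appear to hold the workload until the RPC side has created the bdevs, though those two flags are read from context here rather than from the bdevperf docs. The zero-copy notice is expected, since 3145728 B is far above the 65536 B threshold. The command as recorded in this log:

  # Sketch: the bdevperf invocation issued by the test (paths from this run).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
      -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid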
00:19:10.266   23:52:40	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:10.266   23:52:40	-- common/autotest_common.sh@862 -- # return 0
00:19:10.266   23:52:40	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:19:10.266   23:52:40	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:19:10.266   23:52:40	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:19:10.266  BaseBdev1
00:19:10.266   23:52:40	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:19:10.266   23:52:40	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:19:10.267   23:52:40	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:19:10.544  BaseBdev2
00:19:10.544   23:52:41	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:19:10.814  spare_malloc
00:19:10.814   23:52:41	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:19:11.073  spare_delay
00:19:11.073   23:52:41	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:19:11.331  [2024-12-13 23:52:41.836584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:19:11.331  [2024-12-13 23:52:41.836660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:11.331  [2024-12-13 23:52:41.836709] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80
00:19:11.331  [2024-12-13 23:52:41.836754] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:11.331  [2024-12-13 23:52:41.839225] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:11.331  [2024-12-13 23:52:41.839290] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:19:11.331  spare
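The spare is a three-layer stack: a malloc bdev for backing storage, a delay bdev on top, and a passthru named "spare" that the raid module will later claim. Assuming bdev_delay_create's -r/-t/-w/-n knobs are the usual average/p99 read and write latencies in microseconds, this spare gets zero added read latency but 100 ms writes, which keeps the upcoming rebuild slow enough for the test to observe mid-flight. The stack, as created above:

  # Sketch: the spare's bdev stack (malloc -> delay -> passthru), with the
  # same arguments as this run; latency semantics per the assumption above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b spare_malloc
  "$rpc" -s "$sock" bdev_delay_create -b spare_malloc -d spare_delay \
      -r 0 -t 0 -w 100000 -n 100000
  "$rpc" -s "$sock" bdev_passthru_create -b spare_delay -p spare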
00:19:11.331   23:52:41	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
00:19:11.590  [2024-12-13 23:52:42.080664] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:19:11.590  [2024-12-13 23:52:42.082509] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:19:11.590  [2024-12-13 23:52:42.082589] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180
00:19:11.590  [2024-12-13 23:52:42.082601] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:19:11.590  [2024-12-13 23:52:42.082714] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:19:11.590  [2024-12-13 23:52:42.083091] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180
00:19:11.590  [2024-12-13 23:52:42.083115] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180
00:19:11.590  [2024-12-13 23:52:42.083264] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:11.590   23:52:42	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:19:11.590   23:52:42	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:11.590   23:52:42	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:11.590   23:52:42	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:11.590   23:52:42	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:11.590   23:52:42	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:19:11.590   23:52:42	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:11.590   23:52:42	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:11.590   23:52:42	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:11.590   23:52:42	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:11.590    23:52:42	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:11.590    23:52:42	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:11.590   23:52:42	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:11.590    "name": "raid_bdev1",
00:19:11.590    "uuid": "e04e0031-adc6-4c29-a5ed-2f2054d5818c",
00:19:11.590    "strip_size_kb": 0,
00:19:11.590    "state": "online",
00:19:11.590    "raid_level": "raid1",
00:19:11.590    "superblock": false,
00:19:11.590    "num_base_bdevs": 2,
00:19:11.590    "num_base_bdevs_discovered": 2,
00:19:11.590    "num_base_bdevs_operational": 2,
00:19:11.590    "base_bdevs_list": [
00:19:11.590      {
00:19:11.590        "name": "BaseBdev1",
00:19:11.590        "uuid": "3dbc79a3-6942-40de-83ed-7439fda2d723",
00:19:11.590        "is_configured": true,
00:19:11.590        "data_offset": 0,
00:19:11.590        "data_size": 65536
00:19:11.590      },
00:19:11.590      {
00:19:11.590        "name": "BaseBdev2",
00:19:11.590        "uuid": "9debee7f-473d-4325-b084-9efc6d753559",
00:19:11.590        "is_configured": true,
00:19:11.590        "data_offset": 0,
00:19:11.590        "data_size": 65536
00:19:11.590      }
00:19:11.590    ]
00:19:11.590  }'
00:19:11.590   23:52:42	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:11.590   23:52:42	-- common/autotest_common.sh@10 -- # set +x
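Unlike the superblock test earlier, this array is created explicitly over RPC and without an on-disk superblock ("superblock": false in the dump above), so membership lives only in runtime state; that is what makes the explicit remove/add RPCs later in the test necessary. The creation call, as issued:

  # Sketch: assemble raid1 from the two malloc bdevs; superblock is off,
  # so membership exists only in runtime state.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1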
00:19:12.157    23:52:42	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:19:12.157    23:52:42	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:19:12.415  [2024-12-13 23:52:43.096973] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:12.415   23:52:43	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536
00:19:12.415    23:52:43	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:19:12.415    23:52:43	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:12.674   23:52:43	-- bdev/bdev_raid.sh@570 -- # data_offset=0
00:19:12.674   23:52:43	-- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:19:12.674   23:52:43	-- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:19:12.674   23:52:43	-- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:19:12.674   23:52:43	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:12.674   23:52:43	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:19:12.674   23:52:43	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:19:12.674   23:52:43	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:19:12.674   23:52:43	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:19:12.674   23:52:43	-- bdev/nbd_common.sh@12 -- # local i
00:19:12.674   23:52:43	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:19:12.674   23:52:43	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:12.674   23:52:43	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:19:12.933  [2024-12-13 23:52:43.572918] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930
00:19:12.933  /dev/nbd0
00:19:12.933    23:52:43	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:19:12.933   23:52:43	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:19:12.933   23:52:43	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:19:12.933   23:52:43	-- common/autotest_common.sh@867 -- # local i
00:19:12.933   23:52:43	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:19:12.933   23:52:43	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:19:12.933   23:52:43	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:19:12.933   23:52:43	-- common/autotest_common.sh@871 -- # break
00:19:12.933   23:52:43	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:19:12.933   23:52:43	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:19:12.933   23:52:43	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:12.933  1+0 records in
00:19:12.933  1+0 records out
00:19:12.933  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335147 s, 12.2 MB/s
00:19:12.933    23:52:43	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:12.933   23:52:43	-- common/autotest_common.sh@884 -- # size=4096
00:19:12.933   23:52:43	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:12.933   23:52:43	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:19:12.933   23:52:43	-- common/autotest_common.sh@887 -- # return 0
00:19:12.933   23:52:43	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:12.933   23:52:43	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:12.933   23:52:43	-- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']'
00:19:12.933   23:52:43	-- bdev/bdev_raid.sh@584 -- # write_unit_size=1
00:19:12.933   23:52:43	-- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
00:19:18.203  65536+0 records in
00:19:18.203  65536+0 records out
00:19:18.203  33554432 bytes (34 MB, 32 MiB) copied, 4.57128 s, 7.3 MB/s
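Before degrading the array the test fills it end to end through the kernel nbd device: 65536 blocks of 512 B matches the 65536-block raid size read back at 23:52:43, so the dd above writes the full 32 MiB that the rebuild will later have to copy to the spare. The export-write-teardown sequence, condensed:

  # Sketch: export the array over nbd and fill every block with random data.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" nbd_start_disk raid_bdev1 /dev/nbd0
  dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct   # 32 MiB
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0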
00:19:18.203   23:52:48	-- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:19:18.203   23:52:48	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:18.203   23:52:48	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:19:18.203   23:52:48	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:18.203   23:52:48	-- bdev/nbd_common.sh@51 -- # local i
00:19:18.203   23:52:48	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:18.203   23:52:48	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:19:18.203  [2024-12-13 23:52:48.440406] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:18.203    23:52:48	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:18.203   23:52:48	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:18.203   23:52:48	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:18.203   23:52:48	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:18.203   23:52:48	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:18.203   23:52:48	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:18.203   23:52:48	-- bdev/nbd_common.sh@41 -- # break
00:19:18.203   23:52:48	-- bdev/nbd_common.sh@45 -- # return 0
00:19:18.203   23:52:48	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:19:18.203  [2024-12-13 23:52:48.680025] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
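bdev_raid_remove_base_bdev detaches a member from the live array without deleting the member bdev itself; raid1 keeps serving from the remaining leg, so the verify below expects the array online with 1 of 1 operational and the vacated slot null. Sketched with a state check:

  # Sketch: degrade the mirror by detaching one leg; data stays on the other.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1")
             | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
  # expected here: "online 1/1"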
00:19:18.203   23:52:48	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:18.203   23:52:48	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:18.203   23:52:48	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:18.203   23:52:48	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:18.203   23:52:48	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:18.203   23:52:48	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:19:18.203   23:52:48	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:18.203   23:52:48	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:18.203   23:52:48	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:18.203   23:52:48	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:18.203    23:52:48	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:18.203    23:52:48	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:18.203   23:52:48	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:18.203    "name": "raid_bdev1",
00:19:18.203    "uuid": "e04e0031-adc6-4c29-a5ed-2f2054d5818c",
00:19:18.203    "strip_size_kb": 0,
00:19:18.203    "state": "online",
00:19:18.203    "raid_level": "raid1",
00:19:18.203    "superblock": false,
00:19:18.203    "num_base_bdevs": 2,
00:19:18.203    "num_base_bdevs_discovered": 1,
00:19:18.203    "num_base_bdevs_operational": 1,
00:19:18.203    "base_bdevs_list": [
00:19:18.203      {
00:19:18.203        "name": null,
00:19:18.203        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:18.203        "is_configured": false,
00:19:18.203        "data_offset": 0,
00:19:18.203        "data_size": 65536
00:19:18.203      },
00:19:18.203      {
00:19:18.203        "name": "BaseBdev2",
00:19:18.203        "uuid": "9debee7f-473d-4325-b084-9efc6d753559",
00:19:18.203        "is_configured": true,
00:19:18.203        "data_offset": 0,
00:19:18.203        "data_size": 65536
00:19:18.203      }
00:19:18.203    ]
00:19:18.203  }'
00:19:18.203   23:52:48	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:18.203   23:52:48	-- common/autotest_common.sh@10 -- # set +x
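A minimal sketch of the verify_raid_bdev_state helper traced at bdev_raid.sh@117-129 above; the exact field comparisons are an assumption based on the locals it declares (expected_state, raid_level, strip_size, num_base_bdevs_operational):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  raid_bdev_info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1")')
  # compare the dumped JSON against the expected values passed by the caller
  [[ $(jq -r '.state' <<< "$raid_bdev_info") == online ]]
  [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == raid1 ]]
  [[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") == 1 ]]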
00:19:18.771   23:52:49	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:19:19.031  [2024-12-13 23:52:49.736178] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:19:19.031  [2024-12-13 23:52:49.736219] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:19.031  [2024-12-13 23:52:49.749040] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09550
00:19:19.031  [2024-12-13 23:52:49.751016] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:19:19.031   23:52:49	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:19:20.408   23:52:50	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:20.408   23:52:50	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:20.408   23:52:50	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:20.408   23:52:50	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:20.408   23:52:50	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:20.408    23:52:50	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:20.408    23:52:50	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:20.408   23:52:51	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:20.408    "name": "raid_bdev1",
00:19:20.408    "uuid": "e04e0031-adc6-4c29-a5ed-2f2054d5818c",
00:19:20.408    "strip_size_kb": 0,
00:19:20.408    "state": "online",
00:19:20.408    "raid_level": "raid1",
00:19:20.408    "superblock": false,
00:19:20.408    "num_base_bdevs": 2,
00:19:20.408    "num_base_bdevs_discovered": 2,
00:19:20.408    "num_base_bdevs_operational": 2,
00:19:20.408    "process": {
00:19:20.408      "type": "rebuild",
00:19:20.408      "target": "spare",
00:19:20.408      "progress": {
00:19:20.408        "blocks": 24576,
00:19:20.408        "percent": 37
00:19:20.409      }
00:19:20.409    },
00:19:20.409    "base_bdevs_list": [
00:19:20.409      {
00:19:20.409        "name": "spare",
00:19:20.409        "uuid": "f78de215-e21a-5d1a-911f-daae8b4cce91",
00:19:20.409        "is_configured": true,
00:19:20.409        "data_offset": 0,
00:19:20.409        "data_size": 65536
00:19:20.409      },
00:19:20.409      {
00:19:20.409        "name": "BaseBdev2",
00:19:20.409        "uuid": "9debee7f-473d-4325-b084-9efc6d753559",
00:19:20.409        "is_configured": true,
00:19:20.409        "data_offset": 0,
00:19:20.409        "data_size": 65536
00:19:20.409      }
00:19:20.409    ]
00:19:20.409  }'
00:19:20.409    23:52:51	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:20.409   23:52:51	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:20.409    23:52:51	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:20.409   23:52:51	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
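verify_raid_bdev_process reduces to the two jq probes traced at @190-191, with // "none" as the fallback so the same check covers both a running rebuild and no process at all; as a sketch:

  [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == rebuild ]]
  [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == spare ]]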
00:19:20.409   23:52:51	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:19:20.668  [2024-12-13 23:52:51.339207] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:19:20.668  [2024-12-13 23:52:51.361356] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:19:20.668  [2024-12-13 23:52:51.361435] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:20.668   23:52:51	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:20.668   23:52:51	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:20.668   23:52:51	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:20.668   23:52:51	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:20.668   23:52:51	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:20.668   23:52:51	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:19:20.668   23:52:51	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:20.668   23:52:51	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:20.668   23:52:51	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:20.668   23:52:51	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:20.668    23:52:51	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:20.668    23:52:51	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:20.927   23:52:51	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:20.927    "name": "raid_bdev1",
00:19:20.927    "uuid": "e04e0031-adc6-4c29-a5ed-2f2054d5818c",
00:19:20.927    "strip_size_kb": 0,
00:19:20.927    "state": "online",
00:19:20.927    "raid_level": "raid1",
00:19:20.927    "superblock": false,
00:19:20.927    "num_base_bdevs": 2,
00:19:20.927    "num_base_bdevs_discovered": 1,
00:19:20.927    "num_base_bdevs_operational": 1,
00:19:20.927    "base_bdevs_list": [
00:19:20.927      {
00:19:20.927        "name": null,
00:19:20.927        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:20.927        "is_configured": false,
00:19:20.927        "data_offset": 0,
00:19:20.927        "data_size": 65536
00:19:20.927      },
00:19:20.927      {
00:19:20.927        "name": "BaseBdev2",
00:19:20.927        "uuid": "9debee7f-473d-4325-b084-9efc6d753559",
00:19:20.927        "is_configured": true,
00:19:20.927        "data_offset": 0,
00:19:20.927        "data_size": 65536
00:19:20.927      }
00:19:20.927    ]
00:19:20.927  }'
00:19:20.927   23:52:51	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:20.927   23:52:51	-- common/autotest_common.sh@10 -- # set +x
00:19:21.864   23:52:52	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:21.864   23:52:52	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:21.864   23:52:52	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:19:21.864   23:52:52	-- bdev/bdev_raid.sh@185 -- # local target=none
00:19:21.864   23:52:52	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:21.864    23:52:52	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:21.864    23:52:52	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:21.864   23:52:52	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:21.864    "name": "raid_bdev1",
00:19:21.864    "uuid": "e04e0031-adc6-4c29-a5ed-2f2054d5818c",
00:19:21.864    "strip_size_kb": 0,
00:19:21.864    "state": "online",
00:19:21.864    "raid_level": "raid1",
00:19:21.864    "superblock": false,
00:19:21.864    "num_base_bdevs": 2,
00:19:21.864    "num_base_bdevs_discovered": 1,
00:19:21.864    "num_base_bdevs_operational": 1,
00:19:21.864    "base_bdevs_list": [
00:19:21.864      {
00:19:21.864        "name": null,
00:19:21.864        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:21.864        "is_configured": false,
00:19:21.864        "data_offset": 0,
00:19:21.864        "data_size": 65536
00:19:21.864      },
00:19:21.864      {
00:19:21.864        "name": "BaseBdev2",
00:19:21.864        "uuid": "9debee7f-473d-4325-b084-9efc6d753559",
00:19:21.864        "is_configured": true,
00:19:21.864        "data_offset": 0,
00:19:21.864        "data_size": 65536
00:19:21.864      }
00:19:21.864    ]
00:19:21.864  }'
00:19:21.864    23:52:52	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:21.864   23:52:52	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:19:21.864    23:52:52	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:22.123   23:52:52	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:19:22.123   23:52:52	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:19:22.123  [2024-12-13 23:52:52.847247] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:19:22.123  [2024-12-13 23:52:52.847287] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:22.382  [2024-12-13 23:52:52.859462] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0
00:19:22.382  [2024-12-13 23:52:52.861366] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:19:22.382   23:52:52	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:19:23.318   23:52:53	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:23.318   23:52:53	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:23.318   23:52:53	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:23.318   23:52:53	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:23.318   23:52:53	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:23.318    23:52:53	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:23.318    23:52:53	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:23.577   23:52:54	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:23.577    "name": "raid_bdev1",
00:19:23.577    "uuid": "e04e0031-adc6-4c29-a5ed-2f2054d5818c",
00:19:23.577    "strip_size_kb": 0,
00:19:23.577    "state": "online",
00:19:23.577    "raid_level": "raid1",
00:19:23.577    "superblock": false,
00:19:23.577    "num_base_bdevs": 2,
00:19:23.577    "num_base_bdevs_discovered": 2,
00:19:23.577    "num_base_bdevs_operational": 2,
00:19:23.577    "process": {
00:19:23.577      "type": "rebuild",
00:19:23.577      "target": "spare",
00:19:23.577      "progress": {
00:19:23.577        "blocks": 24576,
00:19:23.577        "percent": 37
00:19:23.577      }
00:19:23.577    },
00:19:23.577    "base_bdevs_list": [
00:19:23.577      {
00:19:23.577        "name": "spare",
00:19:23.577        "uuid": "f78de215-e21a-5d1a-911f-daae8b4cce91",
00:19:23.577        "is_configured": true,
00:19:23.577        "data_offset": 0,
00:19:23.577        "data_size": 65536
00:19:23.577      },
00:19:23.577      {
00:19:23.577        "name": "BaseBdev2",
00:19:23.577        "uuid": "9debee7f-473d-4325-b084-9efc6d753559",
00:19:23.577        "is_configured": true,
00:19:23.577        "data_offset": 0,
00:19:23.577        "data_size": 65536
00:19:23.577      }
00:19:23.577    ]
00:19:23.577  }'
00:19:23.577    23:52:54	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:23.577   23:52:54	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:23.577    23:52:54	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:23.577   23:52:54	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:23.577   23:52:54	-- bdev/bdev_raid.sh@617 -- # '[' false = true ']'
00:19:23.577   23:52:54	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2
00:19:23.577   23:52:54	-- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:19:23.577   23:52:54	-- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']'
00:19:23.577   23:52:54	-- bdev/bdev_raid.sh@657 -- # local timeout=385
00:19:23.577   23:52:54	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:23.577   23:52:54	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:23.578   23:52:54	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:23.578   23:52:54	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:23.578   23:52:54	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:23.578   23:52:54	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:23.578    23:52:54	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:23.578    23:52:54	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:23.837   23:52:54	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:23.837    "name": "raid_bdev1",
00:19:23.837    "uuid": "e04e0031-adc6-4c29-a5ed-2f2054d5818c",
00:19:23.837    "strip_size_kb": 0,
00:19:23.837    "state": "online",
00:19:23.837    "raid_level": "raid1",
00:19:23.837    "superblock": false,
00:19:23.837    "num_base_bdevs": 2,
00:19:23.837    "num_base_bdevs_discovered": 2,
00:19:23.837    "num_base_bdevs_operational": 2,
00:19:23.837    "process": {
00:19:23.837      "type": "rebuild",
00:19:23.837      "target": "spare",
00:19:23.837      "progress": {
00:19:23.837        "blocks": 30720,
00:19:23.837        "percent": 46
00:19:23.837      }
00:19:23.837    },
00:19:23.837    "base_bdevs_list": [
00:19:23.837      {
00:19:23.837        "name": "spare",
00:19:23.837        "uuid": "f78de215-e21a-5d1a-911f-daae8b4cce91",
00:19:23.837        "is_configured": true,
00:19:23.837        "data_offset": 0,
00:19:23.837        "data_size": 65536
00:19:23.837      },
00:19:23.837      {
00:19:23.837        "name": "BaseBdev2",
00:19:23.837        "uuid": "9debee7f-473d-4325-b084-9efc6d753559",
00:19:23.837        "is_configured": true,
00:19:23.837        "data_offset": 0,
00:19:23.837        "data_size": 65536
00:19:23.837      }
00:19:23.837    ]
00:19:23.837  }'
00:19:23.837    23:52:54	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:23.837   23:52:54	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:23.837    23:52:54	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:23.837   23:52:54	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:23.837   23:52:54	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:19:25.211   23:52:55	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:25.211   23:52:55	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:25.211   23:52:55	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:25.211   23:52:55	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:25.211   23:52:55	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:25.211   23:52:55	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:25.211    23:52:55	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:25.211    23:52:55	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:25.211   23:52:55	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:25.211    "name": "raid_bdev1",
00:19:25.211    "uuid": "e04e0031-adc6-4c29-a5ed-2f2054d5818c",
00:19:25.211    "strip_size_kb": 0,
00:19:25.211    "state": "online",
00:19:25.211    "raid_level": "raid1",
00:19:25.211    "superblock": false,
00:19:25.211    "num_base_bdevs": 2,
00:19:25.211    "num_base_bdevs_discovered": 2,
00:19:25.211    "num_base_bdevs_operational": 2,
00:19:25.211    "process": {
00:19:25.211      "type": "rebuild",
00:19:25.211      "target": "spare",
00:19:25.211      "progress": {
00:19:25.211        "blocks": 57344,
00:19:25.211        "percent": 87
00:19:25.211      }
00:19:25.211    },
00:19:25.211    "base_bdevs_list": [
00:19:25.211      {
00:19:25.211        "name": "spare",
00:19:25.211        "uuid": "f78de215-e21a-5d1a-911f-daae8b4cce91",
00:19:25.211        "is_configured": true,
00:19:25.211        "data_offset": 0,
00:19:25.211        "data_size": 65536
00:19:25.211      },
00:19:25.211      {
00:19:25.211        "name": "BaseBdev2",
00:19:25.211        "uuid": "9debee7f-473d-4325-b084-9efc6d753559",
00:19:25.211        "is_configured": true,
00:19:25.211        "data_offset": 0,
00:19:25.211        "data_size": 65536
00:19:25.211      }
00:19:25.211    ]
00:19:25.211  }'
00:19:25.211    23:52:55	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:25.211   23:52:55	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:25.211    23:52:55	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:25.211   23:52:55	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:25.211   23:52:55	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:19:25.470  [2024-12-13 23:52:56.078625] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:19:25.470  [2024-12-13 23:52:56.078700] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:19:25.470  [2024-12-13 23:52:56.078771] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:26.406   23:52:56	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:26.406   23:52:56	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:26.406   23:52:56	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:26.406   23:52:56	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:26.406   23:52:56	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:26.406   23:52:56	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:26.406    23:52:56	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:26.406    23:52:56	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:26.406   23:52:57	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:26.406    "name": "raid_bdev1",
00:19:26.406    "uuid": "e04e0031-adc6-4c29-a5ed-2f2054d5818c",
00:19:26.406    "strip_size_kb": 0,
00:19:26.406    "state": "online",
00:19:26.406    "raid_level": "raid1",
00:19:26.406    "superblock": false,
00:19:26.406    "num_base_bdevs": 2,
00:19:26.406    "num_base_bdevs_discovered": 2,
00:19:26.406    "num_base_bdevs_operational": 2,
00:19:26.406    "base_bdevs_list": [
00:19:26.406      {
00:19:26.406        "name": "spare",
00:19:26.406        "uuid": "f78de215-e21a-5d1a-911f-daae8b4cce91",
00:19:26.406        "is_configured": true,
00:19:26.406        "data_offset": 0,
00:19:26.406        "data_size": 65536
00:19:26.406      },
00:19:26.406      {
00:19:26.406        "name": "BaseBdev2",
00:19:26.406        "uuid": "9debee7f-473d-4325-b084-9efc6d753559",
00:19:26.406        "is_configured": true,
00:19:26.406        "data_offset": 0,
00:19:26.406        "data_size": 65536
00:19:26.406      }
00:19:26.406    ]
00:19:26.406  }'
00:19:26.406    23:52:57	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:26.665   23:52:57	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:19:26.665    23:52:57	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:26.665   23:52:57	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:19:26.665   23:52:57	-- bdev/bdev_raid.sh@660 -- # break
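The polling that just ended is the loop traced at bdev_raid.sh@657-662: re-read the bdev once per second until the rebuild process disappears or 385 s elapse. A sketch; placing the exit condition on the process-type mismatch is an assumption from the @660 break above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  timeout=385
  while (( SECONDS < timeout )); do
      info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
          | jq -r '.[] | select(.name == "raid_bdev1")')
      # leave the loop once the rebuild process is gone
      [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
      sleep 1
  done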
00:19:26.665   23:52:57	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:26.665   23:52:57	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:26.665   23:52:57	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:19:26.665   23:52:57	-- bdev/bdev_raid.sh@185 -- # local target=none
00:19:26.665   23:52:57	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:26.665    23:52:57	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:26.665    23:52:57	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:26.923   23:52:57	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:26.923    "name": "raid_bdev1",
00:19:26.923    "uuid": "e04e0031-adc6-4c29-a5ed-2f2054d5818c",
00:19:26.923    "strip_size_kb": 0,
00:19:26.923    "state": "online",
00:19:26.923    "raid_level": "raid1",
00:19:26.923    "superblock": false,
00:19:26.923    "num_base_bdevs": 2,
00:19:26.923    "num_base_bdevs_discovered": 2,
00:19:26.923    "num_base_bdevs_operational": 2,
00:19:26.923    "base_bdevs_list": [
00:19:26.923      {
00:19:26.923        "name": "spare",
00:19:26.923        "uuid": "f78de215-e21a-5d1a-911f-daae8b4cce91",
00:19:26.923        "is_configured": true,
00:19:26.923        "data_offset": 0,
00:19:26.923        "data_size": 65536
00:19:26.923      },
00:19:26.923      {
00:19:26.923        "name": "BaseBdev2",
00:19:26.923        "uuid": "9debee7f-473d-4325-b084-9efc6d753559",
00:19:26.923        "is_configured": true,
00:19:26.923        "data_offset": 0,
00:19:26.923        "data_size": 65536
00:19:26.923      }
00:19:26.923    ]
00:19:26.923  }'
00:19:26.923    23:52:57	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:26.923   23:52:57	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:19:26.923    23:52:57	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:26.923   23:52:57	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:19:26.923   23:52:57	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:19:26.923   23:52:57	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:26.923   23:52:57	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:26.923   23:52:57	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:26.923   23:52:57	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:26.923   23:52:57	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:19:26.923   23:52:57	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:26.923   23:52:57	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:26.923   23:52:57	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:26.923   23:52:57	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:26.923    23:52:57	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:26.923    23:52:57	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:27.182   23:52:57	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:27.182    "name": "raid_bdev1",
00:19:27.182    "uuid": "e04e0031-adc6-4c29-a5ed-2f2054d5818c",
00:19:27.182    "strip_size_kb": 0,
00:19:27.182    "state": "online",
00:19:27.182    "raid_level": "raid1",
00:19:27.182    "superblock": false,
00:19:27.182    "num_base_bdevs": 2,
00:19:27.182    "num_base_bdevs_discovered": 2,
00:19:27.182    "num_base_bdevs_operational": 2,
00:19:27.182    "base_bdevs_list": [
00:19:27.182      {
00:19:27.182        "name": "spare",
00:19:27.182        "uuid": "f78de215-e21a-5d1a-911f-daae8b4cce91",
00:19:27.182        "is_configured": true,
00:19:27.182        "data_offset": 0,
00:19:27.182        "data_size": 65536
00:19:27.182      },
00:19:27.182      {
00:19:27.182        "name": "BaseBdev2",
00:19:27.182        "uuid": "9debee7f-473d-4325-b084-9efc6d753559",
00:19:27.182        "is_configured": true,
00:19:27.182        "data_offset": 0,
00:19:27.182        "data_size": 65536
00:19:27.182      }
00:19:27.182    ]
00:19:27.182  }'
00:19:27.182   23:52:57	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:27.182   23:52:57	-- common/autotest_common.sh@10 -- # set +x
00:19:27.748   23:52:58	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:19:28.006  [2024-12-13 23:52:58.595137] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:28.006  [2024-12-13 23:52:58.595166] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:28.006  [2024-12-13 23:52:58.595264] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:28.006  [2024-12-13 23:52:58.595343] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:28.006  [2024-12-13 23:52:58.595357] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline
00:19:28.006    23:52:58	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:28.006    23:52:58	-- bdev/bdev_raid.sh@671 -- # jq length
00:19:28.265   23:52:58	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
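After bdev_raid_delete the test asserts that no raid bdevs remain; the @671 check above condenses to:

  # an empty array from bdev_raid_get_bdevs means the delete fully tore down
  [[ $(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all | jq length) == 0 ]]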
00:19:28.265   23:52:58	-- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:19:28.265   23:52:58	-- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:19:28.265   23:52:58	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:28.265   23:52:58	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:19:28.265   23:52:58	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:19:28.265   23:52:58	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:19:28.265   23:52:58	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:19:28.265   23:52:58	-- bdev/nbd_common.sh@12 -- # local i
00:19:28.265   23:52:58	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:19:28.265   23:52:58	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:19:28.265   23:52:58	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:19:28.524  /dev/nbd0
00:19:28.524    23:52:59	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:19:28.524   23:52:59	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:19:28.524   23:52:59	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:19:28.524   23:52:59	-- common/autotest_common.sh@867 -- # local i
00:19:28.524   23:52:59	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:19:28.524   23:52:59	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:19:28.524   23:52:59	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:19:28.524   23:52:59	-- common/autotest_common.sh@871 -- # break
00:19:28.524   23:52:59	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:19:28.524   23:52:59	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:19:28.524   23:52:59	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:28.524  1+0 records in
00:19:28.524  1+0 records out
00:19:28.524  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507793 s, 8.1 MB/s
00:19:28.524    23:52:59	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:28.524   23:52:59	-- common/autotest_common.sh@884 -- # size=4096
00:19:28.524   23:52:59	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:28.524   23:52:59	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:19:28.524   23:52:59	-- common/autotest_common.sh@887 -- # return 0
00:19:28.524   23:52:59	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:28.524   23:52:59	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:19:28.524   23:52:59	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:19:28.783  /dev/nbd1
00:19:28.783    23:52:59	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:19:28.783   23:52:59	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:19:28.783   23:52:59	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:19:28.783   23:52:59	-- common/autotest_common.sh@867 -- # local i
00:19:28.783   23:52:59	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:19:28.783   23:52:59	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:19:28.783   23:52:59	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:19:28.783   23:52:59	-- common/autotest_common.sh@871 -- # break
00:19:28.783   23:52:59	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:19:28.783   23:52:59	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:19:28.783   23:52:59	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:28.783  1+0 records in
00:19:28.783  1+0 records out
00:19:28.783  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429652 s, 9.5 MB/s
00:19:28.783    23:52:59	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:28.783   23:52:59	-- common/autotest_common.sh@884 -- # size=4096
00:19:28.783   23:52:59	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:28.783   23:52:59	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:19:28.783   23:52:59	-- common/autotest_common.sh@887 -- # return 0
00:19:28.784   23:52:59	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:28.784   23:52:59	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
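waitfornbd, reconstructed from the common/autotest_common.sh@866-887 trace above. A sketch: the interval between grep attempts is not visible in the trace and is assumed here.

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do                  # @869: up to 20 attempts
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                                    # assumption: interval not traced
      done
      # @883-@886: a single direct 4 KiB read must succeed and return data
      dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
          bs=4096 count=1 iflag=direct
      local size
      size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
      rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
      [ "$size" != 0 ]
  }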
00:19:28.784   23:52:59	-- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:19:29.042   23:52:59	-- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:19:29.042   23:52:59	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:29.043   23:52:59	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:19:29.043   23:52:59	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:29.043   23:52:59	-- bdev/nbd_common.sh@51 -- # local i
00:19:29.043   23:52:59	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:29.043   23:52:59	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:19:29.043    23:52:59	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:29.043   23:52:59	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:29.043   23:52:59	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:29.043   23:52:59	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:29.043   23:52:59	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:29.043   23:52:59	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:29.043   23:52:59	-- bdev/nbd_common.sh@41 -- # break
00:19:29.043   23:52:59	-- bdev/nbd_common.sh@45 -- # return 0
00:19:29.043   23:52:59	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:29.043   23:52:59	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:19:29.301    23:52:59	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:19:29.301   23:52:59	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:19:29.301   23:52:59	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:19:29.301   23:52:59	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:29.301   23:52:59	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:29.301   23:52:59	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:19:29.301   23:52:59	-- bdev/nbd_common.sh@41 -- # break
00:19:29.301   23:52:59	-- bdev/nbd_common.sh@45 -- # return 0
00:19:29.301   23:52:59	-- bdev/bdev_raid.sh@692 -- # '[' false = true ']'
00:19:29.301   23:52:59	-- bdev/bdev_raid.sh@709 -- # killprocess 122299
00:19:29.301   23:52:59	-- common/autotest_common.sh@936 -- # '[' -z 122299 ']'
00:19:29.301   23:52:59	-- common/autotest_common.sh@940 -- # kill -0 122299
00:19:29.301    23:52:59	-- common/autotest_common.sh@941 -- # uname
00:19:29.301   23:52:59	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:29.301    23:52:59	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122299
00:19:29.301   23:52:59	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:29.301   23:52:59	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:29.301   23:52:59	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 122299'
00:19:29.301  killing process with pid 122299
00:19:29.301   23:52:59	-- common/autotest_common.sh@955 -- # kill 122299
00:19:29.301  Received shutdown signal, test time was about 60.000000 seconds
00:19:29.301                                                                                                  Latency(us)
[2024-12-13T23:53:00.033Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-13T23:53:00.033Z]  ===================================================================================================================
[2024-12-13T23:53:00.033Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:19:29.301  [2024-12-13 23:52:59.974897] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:19:29.301   23:52:59	-- common/autotest_common.sh@960 -- # wait 122299
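killprocess, reconstructed from the common/autotest_common.sh@936-960 trace around the shutdown above. A sketch; the sudo branch is checked but not taken here.

  killprocess() {
      [ -n "$1" ] || return 1                             # @936: a pid is required
      kill -0 "$1"                                        # @940: pid must still be alive
      local process_name
      if [ "$(uname)" = Linux ]; then                     # @941
          process_name=$(ps --no-headers -o comm= "$1")   # @942: reactor_0 here
      fi
      # @946: a process named sudo would need different handling (branch not taken)
      echo "killing process with pid $1"                  # @954
      kill "$1"                                           # @955
      wait "$1"                                           # @960
  }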
00:19:29.560  [2024-12-13 23:53:00.171273] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:19:30.496   23:53:01	-- bdev/bdev_raid.sh@711 -- # return 0
00:19:30.496  
00:19:30.496  real	0m21.497s
00:19:30.496  user	0m29.460s
00:19:30.496  sys	0m3.597s
00:19:30.496   23:53:01	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:30.496  ************************************
00:19:30.496  END TEST raid_rebuild_test
00:19:30.496  ************************************
00:19:30.496   23:53:01	-- common/autotest_common.sh@10 -- # set +x
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false
00:19:30.755   23:53:01	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:19:30.755   23:53:01	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:30.755   23:53:01	-- common/autotest_common.sh@10 -- # set +x
00:19:30.755  ************************************
00:19:30.755  START TEST raid_rebuild_test_sb
00:19:30.755  ************************************
00:19:30.755   23:53:01	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true false
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@519 -- # local superblock=true
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@520 -- # local background_io=false
00:19:30.755    23:53:01	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:19:30.755    23:53:01	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:30.755    23:53:01	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:19:30.755    23:53:01	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:19:30.755    23:53:01	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:30.755    23:53:01	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:19:30.755    23:53:01	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:19:30.755    23:53:01	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@536 -- # strip_size=0
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@539 -- # '[' true = true ']'
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@540 -- # create_arg+=' -s'
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@544 -- # raid_pid=122839
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:19:30.755   23:53:01	-- bdev/bdev_raid.sh@545 -- # waitforlisten 122839 /var/tmp/spdk-raid.sock
00:19:30.755   23:53:01	-- common/autotest_common.sh@829 -- # '[' -z 122839 ']'
00:19:30.755   23:53:01	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:19:30.755   23:53:01	-- common/autotest_common.sh@834 -- # local max_retries=100
00:19:30.755  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:19:30.755   23:53:01	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:19:30.755   23:53:01	-- common/autotest_common.sh@838 -- # xtrace_disable
00:19:30.755   23:53:01	-- common/autotest_common.sh@10 -- # set +x
00:19:30.755  [2024-12-13 23:53:01.328907] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:19:30.755  [2024-12-13 23:53:01.329094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122839 ]
00:19:30.755  I/O size of 3145728 is greater than zero copy threshold (65536).
00:19:30.755  Zero copy mechanism will not be used.
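The zero-copy notice above follows directly from the bdevperf flags at @543: -o 3M requests 3 MiB I/Os, which exceeds the 64 KiB zero-copy threshold it reports:

  echo $(( 3 * 1024 * 1024 ))   # 3145728, the -o 3M I/O size named in the notice
  echo $(( 64 * 1024 ))         # 65536, the zero-copy threshold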
00:19:31.014  [2024-12-13 23:53:01.498023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:31.014  [2024-12-13 23:53:01.677006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:31.273  [2024-12-13 23:53:01.862424] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:19:31.841   23:53:02	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:31.841   23:53:02	-- common/autotest_common.sh@862 -- # return 0
00:19:31.841   23:53:02	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:19:31.841   23:53:02	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:19:31.841   23:53:02	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:19:31.841  BaseBdev1_malloc
00:19:31.841   23:53:02	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:19:32.100  [2024-12-13 23:53:02.752931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:19:32.100  [2024-12-13 23:53:02.753152] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:32.100  [2024-12-13 23:53:02.753229] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:19:32.100  [2024-12-13 23:53:02.753386] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:32.100  [2024-12-13 23:53:02.755729] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:32.100  [2024-12-13 23:53:02.755910] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:19:32.100  BaseBdev1
00:19:32.100   23:53:02	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:19:32.100   23:53:02	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:19:32.100   23:53:02	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:19:32.358  BaseBdev2_malloc
00:19:32.358   23:53:03	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:19:32.617  [2024-12-13 23:53:03.221594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:19:32.617  [2024-12-13 23:53:03.221775] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:32.617  [2024-12-13 23:53:03.221855] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:19:32.617  [2024-12-13 23:53:03.222028] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:32.617  [2024-12-13 23:53:03.224259] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:32.617  [2024-12-13 23:53:03.224425] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:19:32.617  BaseBdev2
00:19:32.617   23:53:03	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:19:32.876  spare_malloc
00:19:32.876   23:53:03	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:19:33.135  spare_delay
00:19:33.135   23:53:03	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:19:33.394  [2024-12-13 23:53:03.870456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:19:33.394  [2024-12-13 23:53:03.870652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:33.394  [2024-12-13 23:53:03.870733] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:19:33.394  [2024-12-13 23:53:03.870888] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:33.394  [2024-12-13 23:53:03.873175] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:33.394  [2024-12-13 23:53:03.873357] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:19:33.394  spare
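The spare device assembled above is a three-layer stack: a malloc base, a delay injector, and the passthru bdev that the raid will claim. The RPC sequence, collected from the @558-560 trace; per SPDK's bdev_delay_create, -r/-t are average/p99 read latency and -w/-n average/p99 write latency in microseconds, so writes to spare carry ~100 ms of injected latency, presumably to keep the rebuild slow enough to observe:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc   # 32 MiB, 512 B blocks
  $rpc -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay \
      -r 0 -t 0 -w 100000 -n 100000
  $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare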
00:19:33.394   23:53:03	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
00:19:33.394  [2024-12-13 23:53:04.046558] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:19:33.394  [2024-12-13 23:53:04.048564] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:19:33.394  [2024-12-13 23:53:04.048882] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80
00:19:33.394  [2024-12-13 23:53:04.049002] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:19:33.394  [2024-12-13 23:53:04.049160] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930
00:19:33.394  [2024-12-13 23:53:04.049691] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80
00:19:33.394  [2024-12-13 23:53:04.049824] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80
00:19:33.394  [2024-12-13 23:53:04.050053] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:33.394   23:53:04	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:19:33.394   23:53:04	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:33.394   23:53:04	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:33.394   23:53:04	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:33.394   23:53:04	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:33.394   23:53:04	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:19:33.394   23:53:04	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:33.394   23:53:04	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:33.394   23:53:04	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:33.394   23:53:04	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:33.394    23:53:04	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:33.394    23:53:04	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:33.653   23:53:04	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:33.653    "name": "raid_bdev1",
00:19:33.653    "uuid": "f5f98688-7425-4f17-9334-fc171cef091b",
00:19:33.653    "strip_size_kb": 0,
00:19:33.653    "state": "online",
00:19:33.653    "raid_level": "raid1",
00:19:33.653    "superblock": true,
00:19:33.653    "num_base_bdevs": 2,
00:19:33.653    "num_base_bdevs_discovered": 2,
00:19:33.653    "num_base_bdevs_operational": 2,
00:19:33.653    "base_bdevs_list": [
00:19:33.653      {
00:19:33.653        "name": "BaseBdev1",
00:19:33.653        "uuid": "122ff34b-9645-5893-b327-32c2e1986cb9",
00:19:33.653        "is_configured": true,
00:19:33.653        "data_offset": 2048,
00:19:33.653        "data_size": 63488
00:19:33.653      },
00:19:33.653      {
00:19:33.653        "name": "BaseBdev2",
00:19:33.653        "uuid": "b5efc7b1-1bba-5057-84bb-cbddfc60bc77",
00:19:33.653        "is_configured": true,
00:19:33.653        "data_offset": 2048,
00:19:33.653        "data_size": 63488
00:19:33.653      }
00:19:33.653    ]
00:19:33.653  }'
00:19:33.653   23:53:04	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:33.653   23:53:04	-- common/autotest_common.sh@10 -- # set +x
00:19:34.221    23:53:04	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:19:34.221    23:53:04	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:19:34.481  [2024-12-13 23:53:04.986822] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:34.481   23:53:05	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488
00:19:34.481    23:53:05	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:34.481    23:53:05	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:19:34.740   23:53:05	-- bdev/bdev_raid.sh@570 -- # data_offset=2048
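The numbers line up: each 32 MiB / 512 B malloc base bdev holds 65536 blocks, and with -s the raid superblock reserves data_offset = 2048 blocks (1 MiB), leaving the 63488 data blocks reported as raid_bdev_size. That is also why the dd below writes count=63488 rather than the 65536 used in the non-superblock run:

  echo $(( 32 * 1024 * 1024 / 512 ))   # 65536 blocks per base bdev
  echo $(( 65536 - 2048 ))             # 63488 = raid_bdev_size / data_size above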
00:19:34.740   23:53:05	-- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:19:34.740   23:53:05	-- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:19:34.740   23:53:05	-- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:19:34.740   23:53:05	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:34.740   23:53:05	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:19:34.740   23:53:05	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:19:34.740   23:53:05	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:19:34.740   23:53:05	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:19:34.740   23:53:05	-- bdev/nbd_common.sh@12 -- # local i
00:19:34.740   23:53:05	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:19:34.740   23:53:05	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:34.740   23:53:05	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:19:34.740  [2024-12-13 23:53:05.446732] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0
00:19:34.999  /dev/nbd0
00:19:34.999    23:53:05	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:19:34.999   23:53:05	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:19:34.999   23:53:05	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:19:34.999   23:53:05	-- common/autotest_common.sh@867 -- # local i
00:19:34.999   23:53:05	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:19:34.999   23:53:05	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:19:34.999   23:53:05	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:19:34.999   23:53:05	-- common/autotest_common.sh@871 -- # break
00:19:34.999   23:53:05	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:19:34.999   23:53:05	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:19:34.999   23:53:05	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:34.999  1+0 records in
00:19:34.999  1+0 records out
00:19:34.999  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311905 s, 13.1 MB/s
00:19:34.999    23:53:05	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:34.999   23:53:05	-- common/autotest_common.sh@884 -- # size=4096
00:19:34.999   23:53:05	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:34.999   23:53:05	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:19:34.999   23:53:05	-- common/autotest_common.sh@887 -- # return 0
00:19:34.999   23:53:05	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:34.999   23:53:05	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:34.999   23:53:05	-- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']'
00:19:34.999   23:53:05	-- bdev/bdev_raid.sh@584 -- # write_unit_size=1
00:19:34.999   23:53:05	-- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:19:40.305  63488+0 records in
00:19:40.305  63488+0 records out
00:19:40.305  32505856 bytes (33 MB, 31 MiB) copied, 4.69042 s, 6.9 MB/s
00:19:40.305   23:53:10	-- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:19:40.305   23:53:10	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:40.305   23:53:10	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:19:40.305   23:53:10	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:40.305   23:53:10	-- bdev/nbd_common.sh@51 -- # local i
00:19:40.305   23:53:10	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:40.305   23:53:10	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:19:40.305  [2024-12-13 23:53:10.435133] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:40.305    23:53:10	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:40.305   23:53:10	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:40.305   23:53:10	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:40.305   23:53:10	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:40.305   23:53:10	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:40.305   23:53:10	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:40.305   23:53:10	-- bdev/nbd_common.sh@41 -- # break
00:19:40.305   23:53:10	-- bdev/nbd_common.sh@45 -- # return 0
00:19:40.305   23:53:10	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:19:40.305  [2024-12-13 23:53:10.702757] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:19:40.305   23:53:10	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:40.305   23:53:10	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:40.305   23:53:10	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:40.305   23:53:10	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:40.305   23:53:10	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:40.305   23:53:10	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:19:40.305   23:53:10	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:40.305   23:53:10	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:40.305   23:53:10	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:40.305   23:53:10	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:40.305    23:53:10	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:40.305    23:53:10	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:40.305   23:53:10	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:40.305    "name": "raid_bdev1",
00:19:40.305    "uuid": "f5f98688-7425-4f17-9334-fc171cef091b",
00:19:40.305    "strip_size_kb": 0,
00:19:40.305    "state": "online",
00:19:40.305    "raid_level": "raid1",
00:19:40.305    "superblock": true,
00:19:40.305    "num_base_bdevs": 2,
00:19:40.305    "num_base_bdevs_discovered": 1,
00:19:40.305    "num_base_bdevs_operational": 1,
00:19:40.305    "base_bdevs_list": [
00:19:40.305      {
00:19:40.305        "name": null,
00:19:40.305        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:40.305        "is_configured": false,
00:19:40.305        "data_offset": 2048,
00:19:40.305        "data_size": 63488
00:19:40.305      },
00:19:40.305      {
00:19:40.306        "name": "BaseBdev2",
00:19:40.306        "uuid": "b5efc7b1-1bba-5057-84bb-cbddfc60bc77",
00:19:40.306        "is_configured": true,
00:19:40.306        "data_offset": 2048,
00:19:40.306        "data_size": 63488
00:19:40.306      }
00:19:40.306    ]
00:19:40.306  }'
00:19:40.306   23:53:10	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:40.306   23:53:10	-- common/autotest_common.sh@10 -- # set +x
00:19:40.872   23:53:11	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:19:41.131  [2024-12-13 23:53:11.798964] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:19:41.131  [2024-12-13 23:53:11.799116] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:41.131  [2024-12-13 23:53:11.812003] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca2e80
00:19:41.131  [2024-12-13 23:53:11.814082] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:19:41.131   23:53:11	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:19:42.506   23:53:12	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:42.506   23:53:12	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:42.506   23:53:12	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:42.506   23:53:12	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:42.506   23:53:12	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:42.506    23:53:12	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:42.506    23:53:12	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:42.506   23:53:13	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:42.506    "name": "raid_bdev1",
00:19:42.506    "uuid": "f5f98688-7425-4f17-9334-fc171cef091b",
00:19:42.506    "strip_size_kb": 0,
00:19:42.506    "state": "online",
00:19:42.506    "raid_level": "raid1",
00:19:42.506    "superblock": true,
00:19:42.506    "num_base_bdevs": 2,
00:19:42.506    "num_base_bdevs_discovered": 2,
00:19:42.506    "num_base_bdevs_operational": 2,
00:19:42.506    "process": {
00:19:42.506      "type": "rebuild",
00:19:42.506      "target": "spare",
00:19:42.506      "progress": {
00:19:42.506        "blocks": 24576,
00:19:42.506        "percent": 38
00:19:42.506      }
00:19:42.506    },
00:19:42.506    "base_bdevs_list": [
00:19:42.506      {
00:19:42.506        "name": "spare",
00:19:42.506        "uuid": "3bb3d083-9581-5810-8521-cd8464532def",
00:19:42.506        "is_configured": true,
00:19:42.506        "data_offset": 2048,
00:19:42.506        "data_size": 63488
00:19:42.506      },
00:19:42.506      {
00:19:42.506        "name": "BaseBdev2",
00:19:42.506        "uuid": "b5efc7b1-1bba-5057-84bb-cbddfc60bc77",
00:19:42.506        "is_configured": true,
00:19:42.506        "data_offset": 2048,
00:19:42.506        "data_size": 63488
00:19:42.506      }
00:19:42.506    ]
00:19:42.506  }'
00:19:42.506    23:53:13	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:42.506   23:53:13	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:42.506    23:53:13	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:42.506   23:53:13	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:42.506   23:53:13	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:19:42.765  [2024-12-13 23:53:13.364205] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:19:42.765  [2024-12-13 23:53:13.424212] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:19:42.765  [2024-12-13 23:53:13.424291] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:42.765   23:53:13	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:42.765   23:53:13	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:42.765   23:53:13	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:42.765   23:53:13	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:42.765   23:53:13	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:42.765   23:53:13	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:19:42.765   23:53:13	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:42.765   23:53:13	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:42.765   23:53:13	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:42.765   23:53:13	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:42.765    23:53:13	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:42.765    23:53:13	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:43.023   23:53:13	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:43.023    "name": "raid_bdev1",
00:19:43.023    "uuid": "f5f98688-7425-4f17-9334-fc171cef091b",
00:19:43.023    "strip_size_kb": 0,
00:19:43.023    "state": "online",
00:19:43.023    "raid_level": "raid1",
00:19:43.023    "superblock": true,
00:19:43.023    "num_base_bdevs": 2,
00:19:43.023    "num_base_bdevs_discovered": 1,
00:19:43.023    "num_base_bdevs_operational": 1,
00:19:43.023    "base_bdevs_list": [
00:19:43.023      {
00:19:43.023        "name": null,
00:19:43.023        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:43.023        "is_configured": false,
00:19:43.023        "data_offset": 2048,
00:19:43.023        "data_size": 63488
00:19:43.023      },
00:19:43.023      {
00:19:43.023        "name": "BaseBdev2",
00:19:43.023        "uuid": "b5efc7b1-1bba-5057-84bb-cbddfc60bc77",
00:19:43.023        "is_configured": true,
00:19:43.023        "data_offset": 2048,
00:19:43.023        "data_size": 63488
00:19:43.023      }
00:19:43.023    ]
00:19:43.023  }'
00:19:43.023   23:53:13	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:43.023   23:53:13	-- common/autotest_common.sh@10 -- # set +x
00:19:43.590   23:53:14	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:43.590   23:53:14	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:43.590   23:53:14	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:19:43.590   23:53:14	-- bdev/bdev_raid.sh@185 -- # local target=none
00:19:43.590   23:53:14	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:43.590    23:53:14	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:43.590    23:53:14	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:43.848   23:53:14	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:43.848    "name": "raid_bdev1",
00:19:43.848    "uuid": "f5f98688-7425-4f17-9334-fc171cef091b",
00:19:43.848    "strip_size_kb": 0,
00:19:43.848    "state": "online",
00:19:43.848    "raid_level": "raid1",
00:19:43.848    "superblock": true,
00:19:43.848    "num_base_bdevs": 2,
00:19:43.848    "num_base_bdevs_discovered": 1,
00:19:43.848    "num_base_bdevs_operational": 1,
00:19:43.848    "base_bdevs_list": [
00:19:43.848      {
00:19:43.848        "name": null,
00:19:43.848        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:43.848        "is_configured": false,
00:19:43.848        "data_offset": 2048,
00:19:43.848        "data_size": 63488
00:19:43.848      },
00:19:43.848      {
00:19:43.848        "name": "BaseBdev2",
00:19:43.848        "uuid": "b5efc7b1-1bba-5057-84bb-cbddfc60bc77",
00:19:43.848        "is_configured": true,
00:19:43.848        "data_offset": 2048,
00:19:43.848        "data_size": 63488
00:19:43.848      }
00:19:43.848    ]
00:19:43.848  }'
00:19:43.848    23:53:14	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:43.848   23:53:14	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:19:43.848    23:53:14	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:44.107   23:53:14	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:19:44.107   23:53:14	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:19:44.107  [2024-12-13 23:53:14.833518] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:19:44.107  [2024-12-13 23:53:14.833556] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:44.365  [2024-12-13 23:53:14.844120] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3020
00:19:44.365  [2024-12-13 23:53:14.845931] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:19:44.365   23:53:14	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:19:45.300   23:53:15	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:45.300   23:53:15	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:45.300   23:53:15	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:45.300   23:53:15	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:45.300   23:53:15	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:45.300    23:53:15	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:45.300    23:53:15	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:45.559   23:53:16	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:45.559    "name": "raid_bdev1",
00:19:45.559    "uuid": "f5f98688-7425-4f17-9334-fc171cef091b",
00:19:45.559    "strip_size_kb": 0,
00:19:45.559    "state": "online",
00:19:45.559    "raid_level": "raid1",
00:19:45.559    "superblock": true,
00:19:45.559    "num_base_bdevs": 2,
00:19:45.559    "num_base_bdevs_discovered": 2,
00:19:45.559    "num_base_bdevs_operational": 2,
00:19:45.559    "process": {
00:19:45.559      "type": "rebuild",
00:19:45.559      "target": "spare",
00:19:45.559      "progress": {
00:19:45.559        "blocks": 24576,
00:19:45.559        "percent": 38
00:19:45.559      }
00:19:45.559    },
00:19:45.559    "base_bdevs_list": [
00:19:45.559      {
00:19:45.559        "name": "spare",
00:19:45.559        "uuid": "3bb3d083-9581-5810-8521-cd8464532def",
00:19:45.559        "is_configured": true,
00:19:45.559        "data_offset": 2048,
00:19:45.559        "data_size": 63488
00:19:45.559      },
00:19:45.559      {
00:19:45.559        "name": "BaseBdev2",
00:19:45.559        "uuid": "b5efc7b1-1bba-5057-84bb-cbddfc60bc77",
00:19:45.559        "is_configured": true,
00:19:45.559        "data_offset": 2048,
00:19:45.559        "data_size": 63488
00:19:45.559      }
00:19:45.559    ]
00:19:45.559  }'
00:19:45.559    23:53:16	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:45.559   23:53:16	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:45.559    23:53:16	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:45.559   23:53:16	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:45.559   23:53:16	-- bdev/bdev_raid.sh@617 -- # '[' true = true ']'
00:19:45.559   23:53:16	-- bdev/bdev_raid.sh@617 -- # '[' = false ']'
00:19:45.559  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected
00:19:45.559   23:53:16	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2
00:19:45.559   23:53:16	-- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:19:45.559   23:53:16	-- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']'
00:19:45.559   23:53:16	-- bdev/bdev_raid.sh@657 -- # local timeout=407
00:19:45.559   23:53:16	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:45.559   23:53:16	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:45.559   23:53:16	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:45.559   23:53:16	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:45.559   23:53:16	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:45.559   23:53:16	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:45.559    23:53:16	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:45.559    23:53:16	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:45.817   23:53:16	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:45.817    "name": "raid_bdev1",
00:19:45.817    "uuid": "f5f98688-7425-4f17-9334-fc171cef091b",
00:19:45.817    "strip_size_kb": 0,
00:19:45.817    "state": "online",
00:19:45.817    "raid_level": "raid1",
00:19:45.817    "superblock": true,
00:19:45.817    "num_base_bdevs": 2,
00:19:45.817    "num_base_bdevs_discovered": 2,
00:19:45.817    "num_base_bdevs_operational": 2,
00:19:45.817    "process": {
00:19:45.817      "type": "rebuild",
00:19:45.817      "target": "spare",
00:19:45.817      "progress": {
00:19:45.817        "blocks": 30720,
00:19:45.817        "percent": 48
00:19:45.817      }
00:19:45.817    },
00:19:45.817    "base_bdevs_list": [
00:19:45.817      {
00:19:45.817        "name": "spare",
00:19:45.817        "uuid": "3bb3d083-9581-5810-8521-cd8464532def",
00:19:45.817        "is_configured": true,
00:19:45.817        "data_offset": 2048,
00:19:45.817        "data_size": 63488
00:19:45.817      },
00:19:45.817      {
00:19:45.817        "name": "BaseBdev2",
00:19:45.817        "uuid": "b5efc7b1-1bba-5057-84bb-cbddfc60bc77",
00:19:45.817        "is_configured": true,
00:19:45.817        "data_offset": 2048,
00:19:45.817        "data_size": 63488
00:19:45.817      }
00:19:45.817    ]
00:19:45.817  }'
00:19:45.817    23:53:16	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:45.817   23:53:16	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:45.817    23:53:16	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:46.076   23:53:16	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:46.076   23:53:16	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:19:47.012   23:53:17	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:47.012   23:53:17	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:47.012   23:53:17	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:47.012   23:53:17	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:47.012   23:53:17	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:47.012   23:53:17	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:47.012    23:53:17	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:47.012    23:53:17	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:47.271   23:53:17	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:47.271    "name": "raid_bdev1",
00:19:47.271    "uuid": "f5f98688-7425-4f17-9334-fc171cef091b",
00:19:47.271    "strip_size_kb": 0,
00:19:47.271    "state": "online",
00:19:47.271    "raid_level": "raid1",
00:19:47.271    "superblock": true,
00:19:47.271    "num_base_bdevs": 2,
00:19:47.271    "num_base_bdevs_discovered": 2,
00:19:47.271    "num_base_bdevs_operational": 2,
00:19:47.271    "process": {
00:19:47.271      "type": "rebuild",
00:19:47.271      "target": "spare",
00:19:47.271      "progress": {
00:19:47.271        "blocks": 59392,
00:19:47.271        "percent": 93
00:19:47.271      }
00:19:47.271    },
00:19:47.271    "base_bdevs_list": [
00:19:47.271      {
00:19:47.271        "name": "spare",
00:19:47.271        "uuid": "3bb3d083-9581-5810-8521-cd8464532def",
00:19:47.271        "is_configured": true,
00:19:47.271        "data_offset": 2048,
00:19:47.271        "data_size": 63488
00:19:47.271      },
00:19:47.271      {
00:19:47.271        "name": "BaseBdev2",
00:19:47.271        "uuid": "b5efc7b1-1bba-5057-84bb-cbddfc60bc77",
00:19:47.271        "is_configured": true,
00:19:47.271        "data_offset": 2048,
00:19:47.271        "data_size": 63488
00:19:47.271      }
00:19:47.271    ]
00:19:47.271  }'
00:19:47.271    23:53:17	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:47.271   23:53:17	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:47.271    23:53:17	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:47.271   23:53:17	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:47.271   23:53:17	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:19:47.271  [2024-12-13 23:53:17.963992] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:19:47.271  [2024-12-13 23:53:17.964060] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:19:47.271  [2024-12-13 23:53:17.964186] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:48.207   23:53:18	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:48.207   23:53:18	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:48.207   23:53:18	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:48.207   23:53:18	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:48.207   23:53:18	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:48.207   23:53:18	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:48.207    23:53:18	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:48.207    23:53:18	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:48.466   23:53:19	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:48.466    "name": "raid_bdev1",
00:19:48.466    "uuid": "f5f98688-7425-4f17-9334-fc171cef091b",
00:19:48.466    "strip_size_kb": 0,
00:19:48.466    "state": "online",
00:19:48.466    "raid_level": "raid1",
00:19:48.466    "superblock": true,
00:19:48.466    "num_base_bdevs": 2,
00:19:48.466    "num_base_bdevs_discovered": 2,
00:19:48.466    "num_base_bdevs_operational": 2,
00:19:48.466    "base_bdevs_list": [
00:19:48.466      {
00:19:48.466        "name": "spare",
00:19:48.466        "uuid": "3bb3d083-9581-5810-8521-cd8464532def",
00:19:48.466        "is_configured": true,
00:19:48.466        "data_offset": 2048,
00:19:48.466        "data_size": 63488
00:19:48.466      },
00:19:48.466      {
00:19:48.466        "name": "BaseBdev2",
00:19:48.466        "uuid": "b5efc7b1-1bba-5057-84bb-cbddfc60bc77",
00:19:48.466        "is_configured": true,
00:19:48.466        "data_offset": 2048,
00:19:48.466        "data_size": 63488
00:19:48.466      }
00:19:48.466    ]
00:19:48.466  }'
00:19:48.466    23:53:19	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:48.466   23:53:19	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:19:48.466    23:53:19	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:48.725   23:53:19	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:19:48.725   23:53:19	-- bdev/bdev_raid.sh@660 -- # break
00:19:48.725   23:53:19	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:48.725   23:53:19	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:48.725   23:53:19	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:19:48.725   23:53:19	-- bdev/bdev_raid.sh@185 -- # local target=none
00:19:48.725   23:53:19	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:48.725    23:53:19	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:48.725    23:53:19	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:48.725   23:53:19	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:48.725    "name": "raid_bdev1",
00:19:48.725    "uuid": "f5f98688-7425-4f17-9334-fc171cef091b",
00:19:48.725    "strip_size_kb": 0,
00:19:48.725    "state": "online",
00:19:48.725    "raid_level": "raid1",
00:19:48.725    "superblock": true,
00:19:48.725    "num_base_bdevs": 2,
00:19:48.725    "num_base_bdevs_discovered": 2,
00:19:48.725    "num_base_bdevs_operational": 2,
00:19:48.725    "base_bdevs_list": [
00:19:48.725      {
00:19:48.725        "name": "spare",
00:19:48.725        "uuid": "3bb3d083-9581-5810-8521-cd8464532def",
00:19:48.725        "is_configured": true,
00:19:48.725        "data_offset": 2048,
00:19:48.725        "data_size": 63488
00:19:48.725      },
00:19:48.725      {
00:19:48.725        "name": "BaseBdev2",
00:19:48.725        "uuid": "b5efc7b1-1bba-5057-84bb-cbddfc60bc77",
00:19:48.725        "is_configured": true,
00:19:48.725        "data_offset": 2048,
00:19:48.725        "data_size": 63488
00:19:48.725      }
00:19:48.725    ]
00:19:48.725  }'
00:19:48.725    23:53:19	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:48.983   23:53:19	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:19:48.983    23:53:19	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:48.983   23:53:19	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:19:48.983   23:53:19	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:19:48.983   23:53:19	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:48.983   23:53:19	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:48.983   23:53:19	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:48.983   23:53:19	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:48.983   23:53:19	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:19:48.983   23:53:19	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:48.983   23:53:19	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:48.983   23:53:19	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:48.983   23:53:19	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:48.983    23:53:19	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:48.983    23:53:19	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:49.241   23:53:19	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:49.241    "name": "raid_bdev1",
00:19:49.241    "uuid": "f5f98688-7425-4f17-9334-fc171cef091b",
00:19:49.241    "strip_size_kb": 0,
00:19:49.241    "state": "online",
00:19:49.241    "raid_level": "raid1",
00:19:49.241    "superblock": true,
00:19:49.241    "num_base_bdevs": 2,
00:19:49.241    "num_base_bdevs_discovered": 2,
00:19:49.241    "num_base_bdevs_operational": 2,
00:19:49.241    "base_bdevs_list": [
00:19:49.241      {
00:19:49.241        "name": "spare",
00:19:49.241        "uuid": "3bb3d083-9581-5810-8521-cd8464532def",
00:19:49.241        "is_configured": true,
00:19:49.241        "data_offset": 2048,
00:19:49.241        "data_size": 63488
00:19:49.241      },
00:19:49.241      {
00:19:49.241        "name": "BaseBdev2",
00:19:49.241        "uuid": "b5efc7b1-1bba-5057-84bb-cbddfc60bc77",
00:19:49.241        "is_configured": true,
00:19:49.241        "data_offset": 2048,
00:19:49.241        "data_size": 63488
00:19:49.241      }
00:19:49.241    ]
00:19:49.241  }'
00:19:49.241   23:53:19	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:49.242   23:53:19	-- common/autotest_common.sh@10 -- # set +x
00:19:49.808   23:53:20	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:19:50.066  [2024-12-13 23:53:20.603173] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:50.066  [2024-12-13 23:53:20.603203] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:50.066  [2024-12-13 23:53:20.603291] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:50.066  [2024-12-13 23:53:20.603363] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:50.066  [2024-12-13 23:53:20.603375] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline
00:19:50.066    23:53:20	-- bdev/bdev_raid.sh@671 -- # jq length
00:19:50.066    23:53:20	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:50.324   23:53:20	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:19:50.324   23:53:20	-- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:19:50.324   23:53:20	-- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:19:50.324   23:53:20	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:50.324   23:53:20	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:19:50.324   23:53:20	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:19:50.324   23:53:20	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:19:50.324   23:53:20	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:19:50.324   23:53:20	-- bdev/nbd_common.sh@12 -- # local i
00:19:50.324   23:53:20	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:19:50.324   23:53:20	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:19:50.324   23:53:20	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:19:50.583  /dev/nbd0
00:19:50.583    23:53:21	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:19:50.583   23:53:21	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:19:50.583   23:53:21	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:19:50.583   23:53:21	-- common/autotest_common.sh@867 -- # local i
00:19:50.583   23:53:21	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:19:50.583   23:53:21	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:19:50.583   23:53:21	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:19:50.583   23:53:21	-- common/autotest_common.sh@871 -- # break
00:19:50.583   23:53:21	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:19:50.583   23:53:21	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:19:50.583   23:53:21	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:50.583  1+0 records in
00:19:50.583  1+0 records out
00:19:50.583  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549268 s, 7.5 MB/s
00:19:50.583    23:53:21	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:50.583   23:53:21	-- common/autotest_common.sh@884 -- # size=4096
00:19:50.583   23:53:21	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:50.583   23:53:21	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:19:50.583   23:53:21	-- common/autotest_common.sh@887 -- # return 0
00:19:50.583   23:53:21	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:50.583   23:53:21	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:19:50.583   23:53:21	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:19:50.841  /dev/nbd1
00:19:50.841    23:53:21	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:19:50.841   23:53:21	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:19:50.841   23:53:21	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:19:50.841   23:53:21	-- common/autotest_common.sh@867 -- # local i
00:19:50.841   23:53:21	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:19:50.841   23:53:21	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:19:50.841   23:53:21	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:19:50.841   23:53:21	-- common/autotest_common.sh@871 -- # break
00:19:50.841   23:53:21	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:19:50.841   23:53:21	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:19:50.841   23:53:21	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:50.841  1+0 records in
00:19:50.841  1+0 records out
00:19:50.841  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048256 s, 8.5 MB/s
00:19:50.841    23:53:21	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:50.841   23:53:21	-- common/autotest_common.sh@884 -- # size=4096
00:19:50.841   23:53:21	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:50.841   23:53:21	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:19:50.841   23:53:21	-- common/autotest_common.sh@887 -- # return 0
00:19:50.841   23:53:21	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:50.841   23:53:21	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:19:50.841   23:53:21	-- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:19:50.841   23:53:21	-- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:19:50.841   23:53:21	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:50.841   23:53:21	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:19:50.841   23:53:21	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:50.841   23:53:21	-- bdev/nbd_common.sh@51 -- # local i
00:19:50.841   23:53:21	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:50.841   23:53:21	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:19:51.100    23:53:21	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:51.100   23:53:21	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:51.100   23:53:21	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:51.100   23:53:21	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:51.100   23:53:21	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:51.100   23:53:21	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:51.100   23:53:21	-- bdev/nbd_common.sh@41 -- # break
00:19:51.100   23:53:21	-- bdev/nbd_common.sh@45 -- # return 0
00:19:51.100   23:53:21	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:51.100   23:53:21	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:19:51.358    23:53:21	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:19:51.358   23:53:21	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:19:51.358   23:53:21	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:19:51.358   23:53:21	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:51.358   23:53:21	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:51.358   23:53:21	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:19:51.358   23:53:21	-- bdev/nbd_common.sh@41 -- # break
00:19:51.358   23:53:21	-- bdev/nbd_common.sh@45 -- # return 0
00:19:51.358   23:53:21	-- bdev/bdev_raid.sh@692 -- # '[' true = true ']'
00:19:51.358   23:53:22	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:19:51.358   23:53:22	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']'
00:19:51.358   23:53:22	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:19:51.617   23:53:22	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:19:51.876  [2024-12-13 23:53:22.402309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:19:51.876  [2024-12-13 23:53:22.402394] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:51.876  [2024-12-13 23:53:22.402431] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:19:51.876  [2024-12-13 23:53:22.402459] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:51.876  [2024-12-13 23:53:22.404791] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:51.876  [2024-12-13 23:53:22.404862] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:19:51.876  [2024-12-13 23:53:22.404958] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:19:51.876  [2024-12-13 23:53:22.405016] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:19:51.876  BaseBdev1
00:19:51.876   23:53:22	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:19:51.876   23:53:22	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']'
00:19:51.876   23:53:22	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2
00:19:51.876   23:53:22	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:19:52.135  [2024-12-13 23:53:22.818843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:19:52.135  [2024-12-13 23:53:22.818912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:52.135  [2024-12-13 23:53:22.818947] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:19:52.135  [2024-12-13 23:53:22.818976] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:52.135  [2024-12-13 23:53:22.819372] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:52.135  [2024-12-13 23:53:22.819447] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:19:52.135  [2024-12-13 23:53:22.819554] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2
00:19:52.135  [2024-12-13 23:53:22.819570] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1)
00:19:52.135  [2024-12-13 23:53:22.819592] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:52.135  [2024-12-13 23:53:22.819609] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state configuring
00:19:52.135  [2024-12-13 23:53:22.819668] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:19:52.135  BaseBdev2
00:19:52.135   23:53:22	-- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:19:52.393   23:53:23	-- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:19:52.652  [2024-12-13 23:53:23.170890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:19:52.652  [2024-12-13 23:53:23.170944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:52.652  [2024-12-13 23:53:23.170976] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:19:52.652  [2024-12-13 23:53:23.170995] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:52.652  [2024-12-13 23:53:23.171411] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:52.652  [2024-12-13 23:53:23.171475] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:19:52.652  [2024-12-13 23:53:23.171586] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare
00:19:52.652  [2024-12-13 23:53:23.171624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:52.652  spare
00:19:52.652   23:53:23	-- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:19:52.652   23:53:23	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:52.652   23:53:23	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:52.652   23:53:23	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:52.652   23:53:23	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:52.652   23:53:23	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:19:52.652   23:53:23	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:52.652   23:53:23	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:52.652   23:53:23	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:52.652   23:53:23	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:52.652    23:53:23	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:52.652    23:53:23	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:52.652  [2024-12-13 23:53:23.271720] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880
00:19:52.652  [2024-12-13 23:53:23.271741] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:19:52.652  [2024-12-13 23:53:23.271858] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0
00:19:52.652  [2024-12-13 23:53:23.272189] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880
00:19:52.652  [2024-12-13 23:53:23.272211] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880
00:19:52.652  [2024-12-13 23:53:23.272323] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:52.911   23:53:23	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:52.911    "name": "raid_bdev1",
00:19:52.911    "uuid": "f5f98688-7425-4f17-9334-fc171cef091b",
00:19:52.911    "strip_size_kb": 0,
00:19:52.911    "state": "online",
00:19:52.911    "raid_level": "raid1",
00:19:52.911    "superblock": true,
00:19:52.911    "num_base_bdevs": 2,
00:19:52.911    "num_base_bdevs_discovered": 2,
00:19:52.911    "num_base_bdevs_operational": 2,
00:19:52.911    "base_bdevs_list": [
00:19:52.911      {
00:19:52.911        "name": "spare",
00:19:52.911        "uuid": "3bb3d083-9581-5810-8521-cd8464532def",
00:19:52.911        "is_configured": true,
00:19:52.911        "data_offset": 2048,
00:19:52.911        "data_size": 63488
00:19:52.911      },
00:19:52.911      {
00:19:52.911        "name": "BaseBdev2",
00:19:52.911        "uuid": "b5efc7b1-1bba-5057-84bb-cbddfc60bc77",
00:19:52.911        "is_configured": true,
00:19:52.911        "data_offset": 2048,
00:19:52.911        "data_size": 63488
00:19:52.911      }
00:19:52.911    ]
00:19:52.911  }'
00:19:52.911   23:53:23	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:52.911   23:53:23	-- common/autotest_common.sh@10 -- # set +x
00:19:53.478   23:53:24	-- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:53.478   23:53:24	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:53.478   23:53:24	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:19:53.478   23:53:24	-- bdev/bdev_raid.sh@185 -- # local target=none
00:19:53.478   23:53:24	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:53.478    23:53:24	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:53.478    23:53:24	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:53.736   23:53:24	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:53.736    "name": "raid_bdev1",
00:19:53.736    "uuid": "f5f98688-7425-4f17-9334-fc171cef091b",
00:19:53.736    "strip_size_kb": 0,
00:19:53.736    "state": "online",
00:19:53.736    "raid_level": "raid1",
00:19:53.736    "superblock": true,
00:19:53.736    "num_base_bdevs": 2,
00:19:53.736    "num_base_bdevs_discovered": 2,
00:19:53.736    "num_base_bdevs_operational": 2,
00:19:53.736    "base_bdevs_list": [
00:19:53.736      {
00:19:53.736        "name": "spare",
00:19:53.736        "uuid": "3bb3d083-9581-5810-8521-cd8464532def",
00:19:53.736        "is_configured": true,
00:19:53.736        "data_offset": 2048,
00:19:53.736        "data_size": 63488
00:19:53.736      },
00:19:53.736      {
00:19:53.736        "name": "BaseBdev2",
00:19:53.736        "uuid": "b5efc7b1-1bba-5057-84bb-cbddfc60bc77",
00:19:53.736        "is_configured": true,
00:19:53.736        "data_offset": 2048,
00:19:53.736        "data_size": 63488
00:19:53.736      }
00:19:53.736    ]
00:19:53.736  }'
00:19:53.736    23:53:24	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:53.736   23:53:24	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:19:53.736    23:53:24	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:53.736   23:53:24	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:19:53.736    23:53:24	-- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:53.736    23:53:24	-- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name'
00:19:53.994   23:53:24	-- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]]
00:19:53.994   23:53:24	-- bdev/bdev_raid.sh@709 -- # killprocess 122839
00:19:53.994   23:53:24	-- common/autotest_common.sh@936 -- # '[' -z 122839 ']'
00:19:53.994   23:53:24	-- common/autotest_common.sh@940 -- # kill -0 122839
00:19:53.994    23:53:24	-- common/autotest_common.sh@941 -- # uname
00:19:53.994   23:53:24	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:53.994    23:53:24	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122839
00:19:53.994  killing process with pid 122839
00:19:53.994  Received shutdown signal, test time was about 60.000000 seconds
00:19:53.994  
00:19:53.994                                                                                                  Latency(us)
00:19:53.994  
[2024-12-13T23:53:24.726Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:53.994  
[2024-12-13T23:53:24.726Z]  ===================================================================================================================
00:19:53.994  
[2024-12-13T23:53:24.726Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:19:53.994   23:53:24	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:53.994   23:53:24	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:53.994   23:53:24	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 122839'
00:19:53.994   23:53:24	-- common/autotest_common.sh@955 -- # kill 122839
00:19:53.994  [2024-12-13 23:53:24.602200] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:19:53.994   23:53:24	-- common/autotest_common.sh@960 -- # wait 122839
00:19:53.994  [2024-12-13 23:53:24.602258] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:53.994  [2024-12-13 23:53:24.602309] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:53.994  [2024-12-13 23:53:24.602319] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline
00:19:54.253  [2024-12-13 23:53:24.801322] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:19:55.188  ************************************
00:19:55.188  END TEST raid_rebuild_test_sb
00:19:55.188  ************************************
00:19:55.188   23:53:25	-- bdev/bdev_raid.sh@711 -- # return 0
00:19:55.188  
00:19:55.188  real	0m24.564s
00:19:55.188  user	0m35.459s
00:19:55.188  sys	0m3.800s
00:19:55.188   23:53:25	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:55.188   23:53:25	-- common/autotest_common.sh@10 -- # set +x
00:19:55.188   23:53:25	-- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true
00:19:55.188   23:53:25	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:19:55.188   23:53:25	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:55.188   23:53:25	-- common/autotest_common.sh@10 -- # set +x
00:19:55.188  ************************************
00:19:55.188  START TEST raid_rebuild_test_io
00:19:55.188  ************************************
00:19:55.188   23:53:25	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false true
00:19:55.188   23:53:25	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:19:55.188   23:53:25	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2
00:19:55.188   23:53:25	-- bdev/bdev_raid.sh@519 -- # local superblock=false
00:19:55.188   23:53:25	-- bdev/bdev_raid.sh@520 -- # local background_io=true
00:19:55.188    23:53:25	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:19:55.188    23:53:25	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:55.188    23:53:25	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:19:55.188    23:53:25	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:19:55.188    23:53:25	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:55.188    23:53:25	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:19:55.188    23:53:25	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:19:55.188    23:53:25	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:55.188   23:53:25	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:19:55.188   23:53:25	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:19:55.188   23:53:25	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:19:55.188   23:53:25	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:19:55.188   23:53:25	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:19:55.189   23:53:25	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:19:55.189   23:53:25	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:19:55.189   23:53:25	-- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:19:55.189   23:53:25	-- bdev/bdev_raid.sh@536 -- # strip_size=0
00:19:55.189   23:53:25	-- bdev/bdev_raid.sh@539 -- # '[' false = true ']'
00:19:55.189   23:53:25	-- bdev/bdev_raid.sh@544 -- # raid_pid=123461
00:19:55.189   23:53:25	-- bdev/bdev_raid.sh@545 -- # waitforlisten 123461 /var/tmp/spdk-raid.sock
00:19:55.189   23:53:25	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:19:55.189   23:53:25	-- common/autotest_common.sh@829 -- # '[' -z 123461 ']'
00:19:55.189   23:53:25	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:19:55.189   23:53:25	-- common/autotest_common.sh@834 -- # local max_retries=100
00:19:55.189  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:19:55.189   23:53:25	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:19:55.189   23:53:25	-- common/autotest_common.sh@838 -- # xtrace_disable
00:19:55.189   23:53:25	-- common/autotest_common.sh@10 -- # set +x
00:19:55.447  [2024-12-13 23:53:25.958248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:19:55.447  I/O size of 3145728 is greater than zero copy threshold (65536).
00:19:55.447  Zero copy mechanism will not be used.
00:19:55.447  [2024-12-13 23:53:25.958488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123461 ]
00:19:55.447  [2024-12-13 23:53:26.128562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:55.706  [2024-12-13 23:53:26.313932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:55.965  [2024-12-13 23:53:26.498639] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:19:56.224   23:53:26	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:56.224   23:53:26	-- common/autotest_common.sh@862 -- # return 0
00:19:56.224   23:53:26	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:19:56.224   23:53:26	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:19:56.224   23:53:26	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:19:56.482  BaseBdev1
00:19:56.482   23:53:27	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:19:56.482   23:53:27	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:19:56.482   23:53:27	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:19:56.742  BaseBdev2
00:19:56.742   23:53:27	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:19:57.001  spare_malloc
00:19:57.001   23:53:27	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:19:57.260  spare_delay
00:19:57.260   23:53:27	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:19:57.260  [2024-12-13 23:53:27.939574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:19:57.260  [2024-12-13 23:53:27.939667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:57.260  [2024-12-13 23:53:27.939712] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80
00:19:57.260  [2024-12-13 23:53:27.939761] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:57.260  [2024-12-13 23:53:27.942150] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:57.260  [2024-12-13 23:53:27.942199] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:19:57.260  spare
00:19:57.260   23:53:27	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
00:19:57.518  [2024-12-13 23:53:28.115667] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:19:57.518  [2024-12-13 23:53:28.117614] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:19:57.518  [2024-12-13 23:53:28.117709] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180
00:19:57.518  [2024-12-13 23:53:28.117722] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:19:57.518  [2024-12-13 23:53:28.117830] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:19:57.518  [2024-12-13 23:53:28.118200] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180
00:19:57.518  [2024-12-13 23:53:28.118225] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180
00:19:57.518  [2024-12-13 23:53:28.118402] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:57.518   23:53:28	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:19:57.518   23:53:28	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:57.518   23:53:28	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:57.518   23:53:28	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:57.518   23:53:28	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:57.518   23:53:28	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:19:57.518   23:53:28	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:57.518   23:53:28	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:57.518   23:53:28	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:57.518   23:53:28	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:57.518    23:53:28	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:57.518    23:53:28	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:57.776   23:53:28	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:57.776    "name": "raid_bdev1",
00:19:57.776    "uuid": "89355627-f38d-4ce1-88e5-0354232a3b0e",
00:19:57.776    "strip_size_kb": 0,
00:19:57.776    "state": "online",
00:19:57.776    "raid_level": "raid1",
00:19:57.776    "superblock": false,
00:19:57.776    "num_base_bdevs": 2,
00:19:57.776    "num_base_bdevs_discovered": 2,
00:19:57.776    "num_base_bdevs_operational": 2,
00:19:57.776    "base_bdevs_list": [
00:19:57.776      {
00:19:57.776        "name": "BaseBdev1",
00:19:57.776        "uuid": "45cb1fa2-0b5f-430b-89f8-0245046a18e3",
00:19:57.776        "is_configured": true,
00:19:57.776        "data_offset": 0,
00:19:57.776        "data_size": 65536
00:19:57.776      },
00:19:57.776      {
00:19:57.776        "name": "BaseBdev2",
00:19:57.776        "uuid": "702497be-06c2-4dd7-ac6e-c7211dea2473",
00:19:57.776        "is_configured": true,
00:19:57.776        "data_offset": 0,
00:19:57.776        "data_size": 65536
00:19:57.776      }
00:19:57.776    ]
00:19:57.776  }'
00:19:57.776   23:53:28	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:57.776   23:53:28	-- common/autotest_common.sh@10 -- # set +x
00:19:58.343    23:53:28	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:19:58.343    23:53:28	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:19:58.601  [2024-12-13 23:53:29.203969] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:58.601   23:53:29	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536
00:19:58.601    23:53:29	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:19:58.601    23:53:29	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:58.860   23:53:29	-- bdev/bdev_raid.sh@570 -- # data_offset=0
00:19:58.860   23:53:29	-- bdev/bdev_raid.sh@572 -- # '[' true = true ']'
00:19:58.860   23:53:29	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:19:58.860   23:53:29	-- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:19:58.860  [2024-12-13 23:53:29.527107] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860
00:19:58.860  I/O size of 3145728 is greater than zero copy threshold (65536).
00:19:58.860  Zero copy mechanism will not be used.
00:19:58.860  Running I/O for 60 seconds...
00:19:59.118  [2024-12-13 23:53:29.643974] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:19:59.118  [2024-12-13 23:53:29.649989] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005860
00:19:59.118   23:53:29	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:59.118   23:53:29	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:59.118   23:53:29	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:59.118   23:53:29	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:59.118   23:53:29	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:59.118   23:53:29	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:19:59.118   23:53:29	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:59.118   23:53:29	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:59.118   23:53:29	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:59.118   23:53:29	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:59.118    23:53:29	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:59.118    23:53:29	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:59.376   23:53:29	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:59.376    "name": "raid_bdev1",
00:19:59.376    "uuid": "89355627-f38d-4ce1-88e5-0354232a3b0e",
00:19:59.376    "strip_size_kb": 0,
00:19:59.376    "state": "online",
00:19:59.376    "raid_level": "raid1",
00:19:59.376    "superblock": false,
00:19:59.376    "num_base_bdevs": 2,
00:19:59.376    "num_base_bdevs_discovered": 1,
00:19:59.376    "num_base_bdevs_operational": 1,
00:19:59.376    "base_bdevs_list": [
00:19:59.376      {
00:19:59.376        "name": null,
00:19:59.376        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:59.376        "is_configured": false,
00:19:59.376        "data_offset": 0,
00:19:59.376        "data_size": 65536
00:19:59.376      },
00:19:59.376      {
00:19:59.376        "name": "BaseBdev2",
00:19:59.376        "uuid": "702497be-06c2-4dd7-ac6e-c7211dea2473",
00:19:59.376        "is_configured": true,
00:19:59.376        "data_offset": 0,
00:19:59.376        "data_size": 65536
00:19:59.376      }
00:19:59.376    ]
00:19:59.377  }'
00:19:59.377   23:53:29	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:59.377   23:53:29	-- common/autotest_common.sh@10 -- # set +x
00:19:59.635   23:53:30	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:19:59.894  [2024-12-13 23:53:30.624752] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:19:59.894  [2024-12-13 23:53:30.624819] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:00.152  [2024-12-13 23:53:30.665393] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930
00:20:00.152  [2024-12-13 23:53:30.667420] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:00.152   23:53:30	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:20:00.152  [2024-12-13 23:53:30.775360] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:20:00.152  [2024-12-13 23:53:30.775785] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:20:00.421  [2024-12-13 23:53:30.895519] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:20:00.421  [2024-12-13 23:53:30.895654] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:20:00.726  [2024-12-13 23:53:31.206727] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:20:00.726  [2024-12-13 23:53:31.338833] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:20:00.726  [2024-12-13 23:53:31.338993] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:20:00.995  [2024-12-13 23:53:31.661350] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:20:00.995   23:53:31	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:00.995   23:53:31	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:00.995   23:53:31	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:00.995   23:53:31	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:00.995   23:53:31	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:00.995    23:53:31	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:00.995    23:53:31	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:01.254  [2024-12-13 23:53:31.875085] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:20:01.254   23:53:31	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:01.254    "name": "raid_bdev1",
00:20:01.254    "uuid": "89355627-f38d-4ce1-88e5-0354232a3b0e",
00:20:01.254    "strip_size_kb": 0,
00:20:01.254    "state": "online",
00:20:01.254    "raid_level": "raid1",
00:20:01.254    "superblock": false,
00:20:01.254    "num_base_bdevs": 2,
00:20:01.254    "num_base_bdevs_discovered": 2,
00:20:01.254    "num_base_bdevs_operational": 2,
00:20:01.254    "process": {
00:20:01.254      "type": "rebuild",
00:20:01.254      "target": "spare",
00:20:01.254      "progress": {
00:20:01.254        "blocks": 16384,
00:20:01.254        "percent": 25
00:20:01.254      }
00:20:01.254    },
00:20:01.254    "base_bdevs_list": [
00:20:01.254      {
00:20:01.254        "name": "spare",
00:20:01.254        "uuid": "c3ee160c-db2b-546c-8ea5-cf10221281d3",
00:20:01.254        "is_configured": true,
00:20:01.254        "data_offset": 0,
00:20:01.254        "data_size": 65536
00:20:01.254      },
00:20:01.254      {
00:20:01.254        "name": "BaseBdev2",
00:20:01.254        "uuid": "702497be-06c2-4dd7-ac6e-c7211dea2473",
00:20:01.254        "is_configured": true,
00:20:01.254        "data_offset": 0,
00:20:01.254        "data_size": 65536
00:20:01.254      }
00:20:01.254    ]
00:20:01.254  }'
00:20:01.254    23:53:31	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:01.512   23:53:31	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:01.512    23:53:31	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:01.512   23:53:32	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:01.512   23:53:32	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:20:01.512  [2024-12-13 23:53:32.203031] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:20:01.770  [2024-12-13 23:53:32.270146] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:01.770  [2024-12-13 23:53:32.441499] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:20:01.770  [2024-12-13 23:53:32.449001] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:01.770  [2024-12-13 23:53:32.470122] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005860
00:20:01.770   23:53:32	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:20:01.770   23:53:32	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:01.770   23:53:32	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:01.770   23:53:32	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:01.770   23:53:32	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:01.770   23:53:32	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:20:01.770   23:53:32	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:01.770   23:53:32	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:01.770   23:53:32	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:01.770   23:53:32	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:01.770    23:53:32	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:01.770    23:53:32	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:02.028   23:53:32	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:02.028    "name": "raid_bdev1",
00:20:02.028    "uuid": "89355627-f38d-4ce1-88e5-0354232a3b0e",
00:20:02.028    "strip_size_kb": 0,
00:20:02.028    "state": "online",
00:20:02.028    "raid_level": "raid1",
00:20:02.028    "superblock": false,
00:20:02.028    "num_base_bdevs": 2,
00:20:02.028    "num_base_bdevs_discovered": 1,
00:20:02.028    "num_base_bdevs_operational": 1,
00:20:02.028    "base_bdevs_list": [
00:20:02.028      {
00:20:02.028        "name": null,
00:20:02.028        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:02.028        "is_configured": false,
00:20:02.028        "data_offset": 0,
00:20:02.028        "data_size": 65536
00:20:02.028      },
00:20:02.028      {
00:20:02.028        "name": "BaseBdev2",
00:20:02.028        "uuid": "702497be-06c2-4dd7-ac6e-c7211dea2473",
00:20:02.028        "is_configured": true,
00:20:02.028        "data_offset": 0,
00:20:02.028        "data_size": 65536
00:20:02.028      }
00:20:02.028    ]
00:20:02.028  }'
00:20:02.028   23:53:32	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:02.028   23:53:32	-- common/autotest_common.sh@10 -- # set +x
00:20:02.595   23:53:33	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:02.595   23:53:33	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:02.595   23:53:33	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:20:02.595   23:53:33	-- bdev/bdev_raid.sh@185 -- # local target=none
00:20:02.595   23:53:33	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:02.595    23:53:33	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:02.595    23:53:33	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:02.853   23:53:33	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:02.853    "name": "raid_bdev1",
00:20:02.853    "uuid": "89355627-f38d-4ce1-88e5-0354232a3b0e",
00:20:02.853    "strip_size_kb": 0,
00:20:02.853    "state": "online",
00:20:02.853    "raid_level": "raid1",
00:20:02.853    "superblock": false,
00:20:02.853    "num_base_bdevs": 2,
00:20:02.853    "num_base_bdevs_discovered": 1,
00:20:02.853    "num_base_bdevs_operational": 1,
00:20:02.853    "base_bdevs_list": [
00:20:02.853      {
00:20:02.853        "name": null,
00:20:02.853        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:02.853        "is_configured": false,
00:20:02.853        "data_offset": 0,
00:20:02.853        "data_size": 65536
00:20:02.853      },
00:20:02.853      {
00:20:02.853        "name": "BaseBdev2",
00:20:02.853        "uuid": "702497be-06c2-4dd7-ac6e-c7211dea2473",
00:20:02.853        "is_configured": true,
00:20:02.853        "data_offset": 0,
00:20:02.853        "data_size": 65536
00:20:02.853      }
00:20:02.853    ]
00:20:02.853  }'
00:20:02.853    23:53:33	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:02.853   23:53:33	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:20:02.853    23:53:33	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:02.853   23:53:33	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:20:02.853   23:53:33	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:20:03.112  [2024-12-13 23:53:33.816193] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:20:03.112  [2024-12-13 23:53:33.816241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:03.371   23:53:33	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:20:03.371  [2024-12-13 23:53:33.860778] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0
00:20:03.371  [2024-12-13 23:53:33.862749] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:03.371  [2024-12-13 23:53:33.976258] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:20:03.371  [2024-12-13 23:53:33.976654] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:20:03.629  [2024-12-13 23:53:34.190509] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:20:03.629  [2024-12-13 23:53:34.190690] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:20:03.888  [2024-12-13 23:53:34.436493] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:20:04.146  [2024-12-13 23:53:34.643669] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:20:04.146  [2024-12-13 23:53:34.643802] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:20:04.146   23:53:34	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:04.146   23:53:34	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:04.146   23:53:34	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:04.146   23:53:34	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:04.146   23:53:34	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:04.146    23:53:34	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:04.146    23:53:34	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:04.405  [2024-12-13 23:53:34.983835] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:20:04.405   23:53:35	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:04.405    "name": "raid_bdev1",
00:20:04.405    "uuid": "89355627-f38d-4ce1-88e5-0354232a3b0e",
00:20:04.405    "strip_size_kb": 0,
00:20:04.405    "state": "online",
00:20:04.405    "raid_level": "raid1",
00:20:04.405    "superblock": false,
00:20:04.405    "num_base_bdevs": 2,
00:20:04.405    "num_base_bdevs_discovered": 2,
00:20:04.405    "num_base_bdevs_operational": 2,
00:20:04.405    "process": {
00:20:04.405      "type": "rebuild",
00:20:04.405      "target": "spare",
00:20:04.405      "progress": {
00:20:04.405        "blocks": 14336,
00:20:04.405        "percent": 21
00:20:04.405      }
00:20:04.405    },
00:20:04.405    "base_bdevs_list": [
00:20:04.405      {
00:20:04.405        "name": "spare",
00:20:04.405        "uuid": "c3ee160c-db2b-546c-8ea5-cf10221281d3",
00:20:04.405        "is_configured": true,
00:20:04.405        "data_offset": 0,
00:20:04.405        "data_size": 65536
00:20:04.405      },
00:20:04.405      {
00:20:04.405        "name": "BaseBdev2",
00:20:04.405        "uuid": "702497be-06c2-4dd7-ac6e-c7211dea2473",
00:20:04.405        "is_configured": true,
00:20:04.405        "data_offset": 0,
00:20:04.405        "data_size": 65536
00:20:04.405      }
00:20:04.405    ]
00:20:04.405  }'
00:20:04.405    23:53:35	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:04.664   23:53:35	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:04.664    23:53:35	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:04.664   23:53:35	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:04.664   23:53:35	-- bdev/bdev_raid.sh@617 -- # '[' false = true ']'
00:20:04.664   23:53:35	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2
00:20:04.664   23:53:35	-- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:20:04.664   23:53:35	-- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']'
00:20:04.664   23:53:35	-- bdev/bdev_raid.sh@657 -- # local timeout=426
00:20:04.664   23:53:35	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:20:04.664   23:53:35	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:04.664   23:53:35	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:04.664   23:53:35	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:04.664   23:53:35	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:04.664   23:53:35	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:04.664    23:53:35	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:04.664    23:53:35	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:04.923  [2024-12-13 23:53:35.447513] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:20:04.923   23:53:35	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:04.923    "name": "raid_bdev1",
00:20:04.923    "uuid": "89355627-f38d-4ce1-88e5-0354232a3b0e",
00:20:04.923    "strip_size_kb": 0,
00:20:04.923    "state": "online",
00:20:04.923    "raid_level": "raid1",
00:20:04.923    "superblock": false,
00:20:04.923    "num_base_bdevs": 2,
00:20:04.923    "num_base_bdevs_discovered": 2,
00:20:04.923    "num_base_bdevs_operational": 2,
00:20:04.923    "process": {
00:20:04.923      "type": "rebuild",
00:20:04.923      "target": "spare",
00:20:04.923      "progress": {
00:20:04.923        "blocks": 20480,
00:20:04.923        "percent": 31
00:20:04.923      }
00:20:04.923    },
00:20:04.923    "base_bdevs_list": [
00:20:04.923      {
00:20:04.923        "name": "spare",
00:20:04.923        "uuid": "c3ee160c-db2b-546c-8ea5-cf10221281d3",
00:20:04.923        "is_configured": true,
00:20:04.923        "data_offset": 0,
00:20:04.923        "data_size": 65536
00:20:04.923      },
00:20:04.923      {
00:20:04.923        "name": "BaseBdev2",
00:20:04.923        "uuid": "702497be-06c2-4dd7-ac6e-c7211dea2473",
00:20:04.923        "is_configured": true,
00:20:04.923        "data_offset": 0,
00:20:04.923        "data_size": 65536
00:20:04.923      }
00:20:04.923    ]
00:20:04.923  }'
00:20:04.923    23:53:35	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:04.923   23:53:35	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:04.923    23:53:35	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:04.923   23:53:35	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:04.923   23:53:35	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:20:05.490  [2024-12-13 23:53:36.179079] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:20:06.058   23:53:36	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:20:06.058   23:53:36	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:06.058   23:53:36	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:06.058   23:53:36	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:06.058   23:53:36	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:06.058   23:53:36	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:06.058    23:53:36	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:06.058    23:53:36	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:06.316   23:53:36	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:06.316    "name": "raid_bdev1",
00:20:06.316    "uuid": "89355627-f38d-4ce1-88e5-0354232a3b0e",
00:20:06.316    "strip_size_kb": 0,
00:20:06.316    "state": "online",
00:20:06.316    "raid_level": "raid1",
00:20:06.316    "superblock": false,
00:20:06.316    "num_base_bdevs": 2,
00:20:06.316    "num_base_bdevs_discovered": 2,
00:20:06.316    "num_base_bdevs_operational": 2,
00:20:06.316    "process": {
00:20:06.316      "type": "rebuild",
00:20:06.316      "target": "spare",
00:20:06.316      "progress": {
00:20:06.316        "blocks": 45056,
00:20:06.316        "percent": 68
00:20:06.316      }
00:20:06.316    },
00:20:06.316    "base_bdevs_list": [
00:20:06.316      {
00:20:06.316        "name": "spare",
00:20:06.316        "uuid": "c3ee160c-db2b-546c-8ea5-cf10221281d3",
00:20:06.316        "is_configured": true,
00:20:06.316        "data_offset": 0,
00:20:06.316        "data_size": 65536
00:20:06.316      },
00:20:06.316      {
00:20:06.316        "name": "BaseBdev2",
00:20:06.316        "uuid": "702497be-06c2-4dd7-ac6e-c7211dea2473",
00:20:06.316        "is_configured": true,
00:20:06.316        "data_offset": 0,
00:20:06.316        "data_size": 65536
00:20:06.316      }
00:20:06.316    ]
00:20:06.316  }'
00:20:06.316    23:53:36	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:06.316   23:53:36	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:06.316    23:53:36	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:06.316   23:53:36	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:06.316   23:53:36	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:20:07.252  [2024-12-13 23:53:37.806989] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:20:07.252   23:53:37	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:20:07.252   23:53:37	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:07.252   23:53:37	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:07.252   23:53:37	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:07.252   23:53:37	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:07.252   23:53:37	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:07.252    23:53:37	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:07.252    23:53:37	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:07.252  [2024-12-13 23:53:37.912709] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:20:07.252  [2024-12-13 23:53:37.914437] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:07.510   23:53:38	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:07.510    "name": "raid_bdev1",
00:20:07.510    "uuid": "89355627-f38d-4ce1-88e5-0354232a3b0e",
00:20:07.510    "strip_size_kb": 0,
00:20:07.510    "state": "online",
00:20:07.510    "raid_level": "raid1",
00:20:07.510    "superblock": false,
00:20:07.510    "num_base_bdevs": 2,
00:20:07.510    "num_base_bdevs_discovered": 2,
00:20:07.510    "num_base_bdevs_operational": 2,
00:20:07.510    "base_bdevs_list": [
00:20:07.510      {
00:20:07.510        "name": "spare",
00:20:07.510        "uuid": "c3ee160c-db2b-546c-8ea5-cf10221281d3",
00:20:07.510        "is_configured": true,
00:20:07.510        "data_offset": 0,
00:20:07.510        "data_size": 65536
00:20:07.510      },
00:20:07.511      {
00:20:07.511        "name": "BaseBdev2",
00:20:07.511        "uuid": "702497be-06c2-4dd7-ac6e-c7211dea2473",
00:20:07.511        "is_configured": true,
00:20:07.511        "data_offset": 0,
00:20:07.511        "data_size": 65536
00:20:07.511      }
00:20:07.511    ]
00:20:07.511  }'
00:20:07.511    23:53:38	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:07.511   23:53:38	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:20:07.511    23:53:38	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:07.511   23:53:38	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:20:07.511   23:53:38	-- bdev/bdev_raid.sh@660 -- # break
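(The repeated verify_raid_bdev_process iterations above are the script's rebuild-progress poll: bdev_raid.sh@657 sets a deadline, and each pass re-reads .process until the rebuild entry disappears, at which point the jq fallback yields "none" and the loop breaks, as just happened. A condensed sketch of that loop — the structure is inferred from the traced lines @657-662 and @190-191:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    timeout=426   # value set at bdev_raid.sh@657 in the trace above

    while (( SECONDS < timeout )); do   # SECONDS: bash's elapsed-time builtin
        ptype=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [[ $ptype == rebuild ]] || break   # no .process object left: rebuild done
        sleep 1
    done)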
00:20:07.511   23:53:38	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:07.511   23:53:38	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:07.511   23:53:38	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:20:07.511   23:53:38	-- bdev/bdev_raid.sh@185 -- # local target=none
00:20:07.511   23:53:38	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:07.511    23:53:38	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:07.511    23:53:38	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:07.769   23:53:38	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:07.769    "name": "raid_bdev1",
00:20:07.769    "uuid": "89355627-f38d-4ce1-88e5-0354232a3b0e",
00:20:07.769    "strip_size_kb": 0,
00:20:07.769    "state": "online",
00:20:07.769    "raid_level": "raid1",
00:20:07.769    "superblock": false,
00:20:07.769    "num_base_bdevs": 2,
00:20:07.769    "num_base_bdevs_discovered": 2,
00:20:07.769    "num_base_bdevs_operational": 2,
00:20:07.769    "base_bdevs_list": [
00:20:07.769      {
00:20:07.769        "name": "spare",
00:20:07.769        "uuid": "c3ee160c-db2b-546c-8ea5-cf10221281d3",
00:20:07.769        "is_configured": true,
00:20:07.769        "data_offset": 0,
00:20:07.769        "data_size": 65536
00:20:07.769      },
00:20:07.769      {
00:20:07.769        "name": "BaseBdev2",
00:20:07.769        "uuid": "702497be-06c2-4dd7-ac6e-c7211dea2473",
00:20:07.769        "is_configured": true,
00:20:07.769        "data_offset": 0,
00:20:07.769        "data_size": 65536
00:20:07.769      }
00:20:07.769    ]
00:20:07.769  }'
00:20:07.769    23:53:38	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:07.769   23:53:38	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:20:07.769    23:53:38	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:07.770   23:53:38	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:20:07.770   23:53:38	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:07.770   23:53:38	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:07.770   23:53:38	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:07.770   23:53:38	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:07.770   23:53:38	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:07.770   23:53:38	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:20:07.770   23:53:38	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:07.770   23:53:38	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:07.770   23:53:38	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:07.770   23:53:38	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:07.770    23:53:38	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:07.770    23:53:38	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:08.028   23:53:38	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:08.028    "name": "raid_bdev1",
00:20:08.028    "uuid": "89355627-f38d-4ce1-88e5-0354232a3b0e",
00:20:08.028    "strip_size_kb": 0,
00:20:08.028    "state": "online",
00:20:08.028    "raid_level": "raid1",
00:20:08.028    "superblock": false,
00:20:08.028    "num_base_bdevs": 2,
00:20:08.028    "num_base_bdevs_discovered": 2,
00:20:08.028    "num_base_bdevs_operational": 2,
00:20:08.028    "base_bdevs_list": [
00:20:08.028      {
00:20:08.028        "name": "spare",
00:20:08.028        "uuid": "c3ee160c-db2b-546c-8ea5-cf10221281d3",
00:20:08.028        "is_configured": true,
00:20:08.028        "data_offset": 0,
00:20:08.028        "data_size": 65536
00:20:08.028      },
00:20:08.028      {
00:20:08.028        "name": "BaseBdev2",
00:20:08.028        "uuid": "702497be-06c2-4dd7-ac6e-c7211dea2473",
00:20:08.028        "is_configured": true,
00:20:08.028        "data_offset": 0,
00:20:08.028        "data_size": 65536
00:20:08.028      }
00:20:08.028    ]
00:20:08.028  }'
00:20:08.028   23:53:38	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:08.028   23:53:38	-- common/autotest_common.sh@10 -- # set +x
00:20:08.595   23:53:39	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:20:08.853  [2024-12-13 23:53:39.520664] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:08.853  [2024-12-13 23:53:39.520699] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:08.853  
00:20:08.853                                                                                                  Latency(us)
00:20:08.853  
[2024-12-13T23:53:39.585Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:08.853  
[2024-12-13T23:53:39.585Z]  Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:20:08.853  	 raid_bdev1          :      10.02     116.17     348.52       0.00     0.00   11472.81     279.27  108193.98
00:20:08.853  
[2024-12-13T23:53:39.585Z]  ===================================================================================================================
00:20:08.853  
[2024-12-13T23:53:39.585Z]  Total                       :                116.17     348.52       0.00     0.00   11472.81     279.27  108193.98
00:20:08.853  [2024-12-13 23:53:39.563368] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:08.853  [2024-12-13 23:53:39.563427] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:08.853  [2024-12-13 23:53:39.563502] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:08.853  [2024-12-13 23:53:39.563515] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline
00:20:08.853  0
00:20:08.853    23:53:39	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:08.853    23:53:39	-- bdev/bdev_raid.sh@671 -- # jq length
00:20:09.112   23:53:39	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:20:09.112   23:53:39	-- bdev/bdev_raid.sh@673 -- # '[' true = true ']'
00:20:09.112   23:53:39	-- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:20:09.112   23:53:39	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:09.112   23:53:39	-- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:20:09.112   23:53:39	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:09.112   23:53:39	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:20:09.112   23:53:39	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:09.112   23:53:39	-- bdev/nbd_common.sh@12 -- # local i
00:20:09.112   23:53:39	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:09.112   23:53:39	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:09.112   23:53:39	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
00:20:09.370  /dev/nbd0
00:20:09.370    23:53:40	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:20:09.370   23:53:40	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:20:09.370   23:53:40	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:20:09.370   23:53:40	-- common/autotest_common.sh@867 -- # local i
00:20:09.370   23:53:40	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:20:09.370   23:53:40	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:20:09.370   23:53:40	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:20:09.370   23:53:40	-- common/autotest_common.sh@871 -- # break
00:20:09.370   23:53:40	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:20:09.370   23:53:40	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:20:09.370   23:53:40	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:09.370  1+0 records in
00:20:09.370  1+0 records out
00:20:09.370  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240813 s, 17.0 MB/s
00:20:09.370    23:53:40	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:09.370   23:53:40	-- common/autotest_common.sh@884 -- # size=4096
00:20:09.370   23:53:40	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:09.370   23:53:40	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:20:09.370   23:53:40	-- common/autotest_common.sh@887 -- # return 0
00:20:09.370   23:53:40	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:09.370   23:53:40	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
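(The waitfornbd trace above, from common/autotest_common.sh@866-887, is how the test proves an exported NBD device is actually usable: wait for the name to show up in /proc/partitions, then do one O_DIRECT 4 KiB read and confirm dd produced a non-empty file. A hedged reconstruction — the back-off between retries is assumed, as it is not visible in this trace:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumption: some short delay between retries
        done
        # one direct read proves the kernel can complete I/O on the device
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]   # the trace checks '[' 4096 '!=' 0 ']'
    })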
00:20:09.370   23:53:40	-- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}"
00:20:09.370   23:53:40	-- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']'
00:20:09.370   23:53:40	-- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1
00:20:09.370   23:53:40	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:09.370   23:53:40	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2')
00:20:09.370   23:53:40	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:09.370   23:53:40	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:20:09.370   23:53:40	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:09.370   23:53:40	-- bdev/nbd_common.sh@12 -- # local i
00:20:09.370   23:53:40	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:09.370   23:53:40	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:09.370   23:53:40	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1
00:20:09.629  /dev/nbd1
00:20:09.629    23:53:40	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:20:09.629   23:53:40	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:20:09.629   23:53:40	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:20:09.629   23:53:40	-- common/autotest_common.sh@867 -- # local i
00:20:09.629   23:53:40	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:20:09.629   23:53:40	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:20:09.629   23:53:40	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:20:09.629   23:53:40	-- common/autotest_common.sh@871 -- # break
00:20:09.629   23:53:40	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:20:09.629   23:53:40	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:20:09.629   23:53:40	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:09.629  1+0 records in
00:20:09.629  1+0 records out
00:20:09.629  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051662 s, 7.9 MB/s
00:20:09.629    23:53:40	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:09.629   23:53:40	-- common/autotest_common.sh@884 -- # size=4096
00:20:09.629   23:53:40	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:09.629   23:53:40	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:20:09.629   23:53:40	-- common/autotest_common.sh@887 -- # return 0
00:20:09.629   23:53:40	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:09.629   23:53:40	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:09.629   23:53:40	-- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
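(This cmp is the actual data-integrity verdict for the rebuild: nbd0 exports the rebuilt "spare" member and nbd1 the surviving BaseBdev2 — see the nbd_start_disk calls above — and after a completed RAID1 rebuild they must be byte-identical from offset 0 (-i 0 skips nothing; this test has no superblock, so data starts at block 0). cmp exits non-zero at the first differing byte, which would fail the test here. The equivalent standalone check:

    cmp -i 0 /dev/nbd0 /dev/nbd1 && echo "rebuilt member matches survivor")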
00:20:09.888   23:53:40	-- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1
00:20:09.888   23:53:40	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:09.888   23:53:40	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:20:09.888   23:53:40	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:09.888   23:53:40	-- bdev/nbd_common.sh@51 -- # local i
00:20:09.888   23:53:40	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:09.888   23:53:40	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:20:10.146    23:53:40	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:20:10.146   23:53:40	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:20:10.146   23:53:40	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:20:10.146   23:53:40	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:10.146   23:53:40	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:10.146   23:53:40	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:20:10.146   23:53:40	-- bdev/nbd_common.sh@41 -- # break
00:20:10.146   23:53:40	-- bdev/nbd_common.sh@45 -- # return 0
00:20:10.146   23:53:40	-- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:20:10.146   23:53:40	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:10.146   23:53:40	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:20:10.146   23:53:40	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:10.146   23:53:40	-- bdev/nbd_common.sh@51 -- # local i
00:20:10.146   23:53:40	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:10.146   23:53:40	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:20:10.405    23:53:40	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:10.405   23:53:40	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:10.405   23:53:40	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:10.405   23:53:40	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:10.405   23:53:40	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:10.405   23:53:40	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:10.405   23:53:40	-- bdev/nbd_common.sh@41 -- # break
00:20:10.405   23:53:40	-- bdev/nbd_common.sh@45 -- # return 0
00:20:10.405   23:53:40	-- bdev/bdev_raid.sh@692 -- # '[' false = true ']'
00:20:10.405   23:53:40	-- bdev/bdev_raid.sh@709 -- # killprocess 123461
00:20:10.405   23:53:40	-- common/autotest_common.sh@936 -- # '[' -z 123461 ']'
00:20:10.405   23:53:40	-- common/autotest_common.sh@940 -- # kill -0 123461
00:20:10.405    23:53:40	-- common/autotest_common.sh@941 -- # uname
00:20:10.405   23:53:40	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:10.405    23:53:40	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123461
00:20:10.405   23:53:40	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:10.405   23:53:40	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:10.405   23:53:40	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 123461'
00:20:10.405  killing process with pid 123461
00:20:10.405   23:53:40	-- common/autotest_common.sh@955 -- # kill 123461
00:20:10.405  Received shutdown signal, test time was about 11.408114 seconds
00:20:10.405  
00:20:10.405                                                                                                  Latency(us)
00:20:10.405  
[2024-12-13T23:53:41.137Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:10.405  
[2024-12-13T23:53:41.137Z]  ===================================================================================================================
00:20:10.405  
[2024-12-13T23:53:41.137Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:20:10.405  [2024-12-13 23:53:40.937287] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:20:10.405   23:53:40	-- common/autotest_common.sh@960 -- # wait 123461
00:20:10.405  [2024-12-13 23:53:41.092123] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:20:11.781  ************************************
00:20:11.781  END TEST raid_rebuild_test_io
00:20:11.781  ************************************
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@711 -- # return 0
00:20:11.781  
00:20:11.781  real	0m16.270s
00:20:11.781  user	0m25.123s
00:20:11.781  sys	0m1.627s
00:20:11.781   23:53:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:11.781   23:53:42	-- common/autotest_common.sh@10 -- # set +x
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true
00:20:11.781   23:53:42	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:20:11.781   23:53:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:11.781   23:53:42	-- common/autotest_common.sh@10 -- # set +x
00:20:11.781  ************************************
00:20:11.781  START TEST raid_rebuild_test_sb_io
00:20:11.781  ************************************
00:20:11.781   23:53:42	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true true
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@519 -- # local superblock=true
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@520 -- # local background_io=true
00:20:11.781    23:53:42	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:20:11.781    23:53:42	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:11.781    23:53:42	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:20:11.781    23:53:42	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:11.781    23:53:42	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:11.781    23:53:42	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:20:11.781    23:53:42	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:11.781    23:53:42	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@536 -- # strip_size=0
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@539 -- # '[' true = true ']'
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@540 -- # create_arg+=' -s'
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@544 -- # raid_pid=123916
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:20:11.781   23:53:42	-- bdev/bdev_raid.sh@545 -- # waitforlisten 123916 /var/tmp/spdk-raid.sock
00:20:11.781   23:53:42	-- common/autotest_common.sh@829 -- # '[' -z 123916 ']'
00:20:11.781   23:53:42	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:20:11.781   23:53:42	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:11.781   23:53:42	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:20:11.781  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:20:11.781   23:53:42	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:11.781   23:53:42	-- common/autotest_common.sh@10 -- # set +x
00:20:11.781  [2024-12-13 23:53:42.288045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:11.781  I/O size of 3145728 is greater than zero copy threshold (65536).
00:20:11.781  Zero copy mechanism will not be used.
00:20:11.781  [2024-12-13 23:53:42.288245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123916 ]
00:20:11.781  [2024-12-13 23:53:42.454616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:12.040  [2024-12-13 23:53:42.638429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:12.299  [2024-12-13 23:53:42.822877] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
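(bdevperf was launched above with -z, its wait-for-RPC mode, so it only attaches to the raid_bdev1 target (-T raid_bdev1) and then idles: the 60-second 50/50 random read/write load (-t 60 -w randrw -M 50) with 3 MiB I/Os at queue depth 2 (-o 3M -q 2) does not start until the companion script fires the perform_tests RPC, which appears later in this log at bdev_raid.sh@574:

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/spdk-raid.sock perform_tests

The 3 MiB I/O size, 3145728 bytes, is also why SPDK prints the "greater than zero copy threshold (65536)" notice above.)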
00:20:12.557   23:53:43	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:12.557   23:53:43	-- common/autotest_common.sh@862 -- # return 0
00:20:12.557   23:53:43	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:12.557   23:53:43	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:20:12.557   23:53:43	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:20:12.815  BaseBdev1_malloc
00:20:12.815   23:53:43	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:20:13.074  [2024-12-13 23:53:43.701851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:20:13.074  [2024-12-13 23:53:43.701940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:13.074  [2024-12-13 23:53:43.701974] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:20:13.074  [2024-12-13 23:53:43.702025] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:13.074  [2024-12-13 23:53:43.704336] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:13.074  [2024-12-13 23:53:43.704389] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:20:13.074  BaseBdev1
00:20:13.074   23:53:43	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:13.074   23:53:43	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:20:13.074   23:53:43	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:20:13.332  BaseBdev2_malloc
00:20:13.332   23:53:43	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:20:13.590  [2024-12-13 23:53:44.122247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:20:13.590  [2024-12-13 23:53:44.122318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:13.590  [2024-12-13 23:53:44.122362] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:20:13.590  [2024-12-13 23:53:44.122420] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:13.590  [2024-12-13 23:53:44.124635] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:13.590  [2024-12-13 23:53:44.124683] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:20:13.590  BaseBdev2
00:20:13.590   23:53:44	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:20:13.849  spare_malloc
00:20:13.849   23:53:44	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:20:13.849  spare_delay
00:20:13.849   23:53:44	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:20:14.107  [2024-12-13 23:53:44.702896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:14.107  [2024-12-13 23:53:44.702963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:14.107  [2024-12-13 23:53:44.703004] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:20:14.107  [2024-12-13 23:53:44.703048] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:14.107  [2024-12-13 23:53:44.705256] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:14.107  [2024-12-13 23:53:44.705309] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:14.107  spare
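(The "spare" member is deliberately a three-layer stack: a 32 MiB malloc bdev with 512-byte blocks — 65536 blocks, matching the 63488-block data_size plus 2048-block superblock offset in the JSON below — wrapped in a delay bdev that adds 100000 us to every write, wrapped in a passthru named spare. The write delay is what keeps the rebuild slow enough for the progress checks later in the test. Condensed from the RPCs traced at bdev_raid.sh@558-560 above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b spare_malloc
    "$rpc" -s "$sock" bdev_delay_create -b spare_malloc -d spare_delay \
        -r 0 -t 0 -w 100000 -n 100000   # avg/p99 read then write latency, microseconds
    "$rpc" -s "$sock" bdev_passthru_create -b spare_delay -p spare)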
00:20:14.107   23:53:44	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
00:20:14.365  [2024-12-13 23:53:44.878970] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:14.365  [2024-12-13 23:53:44.880873] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:14.365  [2024-12-13 23:53:44.881058] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80
00:20:14.365  [2024-12-13 23:53:44.881072] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:20:14.365  [2024-12-13 23:53:44.881179] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930
00:20:14.365  [2024-12-13 23:53:44.881520] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80
00:20:14.365  [2024-12-13 23:53:44.881542] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80
00:20:14.365  [2024-12-13 23:53:44.881685] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:14.365   23:53:44	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:14.365   23:53:44	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:14.365   23:53:44	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:14.365   23:53:44	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:14.365   23:53:44	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:14.365   23:53:44	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:20:14.365   23:53:44	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:14.365   23:53:44	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:14.365   23:53:44	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:14.365   23:53:44	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:14.365    23:53:44	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:14.365    23:53:44	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:14.623   23:53:45	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:14.623    "name": "raid_bdev1",
00:20:14.623    "uuid": "05bf41a8-3ee4-4e25-bbde-b7b3d3096659",
00:20:14.623    "strip_size_kb": 0,
00:20:14.623    "state": "online",
00:20:14.623    "raid_level": "raid1",
00:20:14.623    "superblock": true,
00:20:14.623    "num_base_bdevs": 2,
00:20:14.623    "num_base_bdevs_discovered": 2,
00:20:14.623    "num_base_bdevs_operational": 2,
00:20:14.623    "base_bdevs_list": [
00:20:14.623      {
00:20:14.623        "name": "BaseBdev1",
00:20:14.623        "uuid": "5d172cc8-5b78-59f1-937a-a1cec12ec0f6",
00:20:14.623        "is_configured": true,
00:20:14.623        "data_offset": 2048,
00:20:14.623        "data_size": 63488
00:20:14.623      },
00:20:14.623      {
00:20:14.623        "name": "BaseBdev2",
00:20:14.623        "uuid": "2296fba9-da6b-55b9-8701-40461e925553",
00:20:14.623        "is_configured": true,
00:20:14.623        "data_offset": 2048,
00:20:14.623        "data_size": 63488
00:20:14.623      }
00:20:14.623    ]
00:20:14.623  }'
00:20:14.623   23:53:45	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:14.623   23:53:45	-- common/autotest_common.sh@10 -- # set +x
00:20:15.190    23:53:45	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:20:15.190    23:53:45	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:20:15.448  [2024-12-13 23:53:45.955288] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:15.448   23:53:45	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488
00:20:15.448    23:53:45	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:15.448    23:53:45	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:20:15.448   23:53:46	-- bdev/bdev_raid.sh@570 -- # data_offset=2048
00:20:15.448   23:53:46	-- bdev/bdev_raid.sh@572 -- # '[' true = true ']'
00:20:15.448   23:53:46	-- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:20:15.448   23:53:46	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:20:15.707  [2024-12-13 23:53:46.234314] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:20:15.707  I/O size of 3145728 is greater than zero copy threshold (65536).
00:20:15.707  Zero copy mechanism will not be used.
00:20:15.707  Running I/O for 60 seconds...
00:20:15.707  [2024-12-13 23:53:46.339158] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:20:15.707  [2024-12-13 23:53:46.339391] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00
00:20:15.707   23:53:46	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:20:15.707   23:53:46	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:15.707   23:53:46	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:15.707   23:53:46	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:15.707   23:53:46	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:15.707   23:53:46	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:20:15.707   23:53:46	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:15.707   23:53:46	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:15.707   23:53:46	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:15.707   23:53:46	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:15.707    23:53:46	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:15.707    23:53:46	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:15.965   23:53:46	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:15.965    "name": "raid_bdev1",
00:20:15.965    "uuid": "05bf41a8-3ee4-4e25-bbde-b7b3d3096659",
00:20:15.965    "strip_size_kb": 0,
00:20:15.965    "state": "online",
00:20:15.965    "raid_level": "raid1",
00:20:15.965    "superblock": true,
00:20:15.965    "num_base_bdevs": 2,
00:20:15.965    "num_base_bdevs_discovered": 1,
00:20:15.965    "num_base_bdevs_operational": 1,
00:20:15.965    "base_bdevs_list": [
00:20:15.965      {
00:20:15.965        "name": null,
00:20:15.965        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:15.965        "is_configured": false,
00:20:15.965        "data_offset": 2048,
00:20:15.965        "data_size": 63488
00:20:15.965      },
00:20:15.965      {
00:20:15.965        "name": "BaseBdev2",
00:20:15.965        "uuid": "2296fba9-da6b-55b9-8701-40461e925553",
00:20:15.965        "is_configured": true,
00:20:15.965        "data_offset": 2048,
00:20:15.965        "data_size": 63488
00:20:15.965      }
00:20:15.965    ]
00:20:15.965  }'
00:20:15.965   23:53:46	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:15.965   23:53:46	-- common/autotest_common.sh@10 -- # set +x
00:20:16.532   23:53:47	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:20:16.790  [2024-12-13 23:53:47.402915] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:20:16.790  [2024-12-13 23:53:47.402982] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:16.790  [2024-12-13 23:53:47.443181] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0
00:20:16.790  [2024-12-13 23:53:47.445143] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:16.790   23:53:47	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:20:17.049  [2024-12-13 23:53:47.558620] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:20:17.049  [2024-12-13 23:53:47.559062] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:20:17.049  [2024-12-13 23:53:47.782411] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:20:17.049  [2024-12-13 23:53:47.782609] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:20:17.615  [2024-12-13 23:53:48.225874] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:20:17.873  [2024-12-13 23:53:48.442954] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:20:17.873   23:53:48	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:17.873   23:53:48	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:17.873   23:53:48	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:17.873   23:53:48	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:17.873   23:53:48	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:17.873    23:53:48	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:17.873    23:53:48	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:18.131  [2024-12-13 23:53:48.656590] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:20:18.131   23:53:48	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:18.131    "name": "raid_bdev1",
00:20:18.131    "uuid": "05bf41a8-3ee4-4e25-bbde-b7b3d3096659",
00:20:18.131    "strip_size_kb": 0,
00:20:18.131    "state": "online",
00:20:18.131    "raid_level": "raid1",
00:20:18.131    "superblock": true,
00:20:18.131    "num_base_bdevs": 2,
00:20:18.131    "num_base_bdevs_discovered": 2,
00:20:18.131    "num_base_bdevs_operational": 2,
00:20:18.131    "process": {
00:20:18.131      "type": "rebuild",
00:20:18.131      "target": "spare",
00:20:18.131      "progress": {
00:20:18.131        "blocks": 16384,
00:20:18.131        "percent": 25
00:20:18.131      }
00:20:18.131    },
00:20:18.131    "base_bdevs_list": [
00:20:18.131      {
00:20:18.131        "name": "spare",
00:20:18.131        "uuid": "edf20eb6-d644-5d16-8400-e51ad0303aab",
00:20:18.131        "is_configured": true,
00:20:18.131        "data_offset": 2048,
00:20:18.131        "data_size": 63488
00:20:18.131      },
00:20:18.131      {
00:20:18.131        "name": "BaseBdev2",
00:20:18.131        "uuid": "2296fba9-da6b-55b9-8701-40461e925553",
00:20:18.131        "is_configured": true,
00:20:18.131        "data_offset": 2048,
00:20:18.131        "data_size": 63488
00:20:18.131      }
00:20:18.131    ]
00:20:18.131  }'
00:20:18.131    23:53:48	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:18.131   23:53:48	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:18.131    23:53:48	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:18.131   23:53:48	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:18.131   23:53:48	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:20:18.390  [2024-12-13 23:53:48.966026] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:20:18.390  [2024-12-13 23:53:49.011328] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:18.390  [2024-12-13 23:53:49.067362] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:20:18.390  [2024-12-13 23:53:49.067537] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:20:18.390  [2024-12-13 23:53:49.068392] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:20:18.390  [2024-12-13 23:53:49.075933] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:18.390  [2024-12-13 23:53:49.119753] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00
00:20:18.649   23:53:49	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:20:18.649   23:53:49	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:18.649   23:53:49	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:18.649   23:53:49	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:18.649   23:53:49	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:18.649   23:53:49	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:20:18.649   23:53:49	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:18.649   23:53:49	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:18.649   23:53:49	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:18.649   23:53:49	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:18.649    23:53:49	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:18.649    23:53:49	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:18.907   23:53:49	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:18.907    "name": "raid_bdev1",
00:20:18.907    "uuid": "05bf41a8-3ee4-4e25-bbde-b7b3d3096659",
00:20:18.907    "strip_size_kb": 0,
00:20:18.907    "state": "online",
00:20:18.907    "raid_level": "raid1",
00:20:18.907    "superblock": true,
00:20:18.907    "num_base_bdevs": 2,
00:20:18.907    "num_base_bdevs_discovered": 1,
00:20:18.907    "num_base_bdevs_operational": 1,
00:20:18.907    "base_bdevs_list": [
00:20:18.907      {
00:20:18.907        "name": null,
00:20:18.907        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:18.907        "is_configured": false,
00:20:18.907        "data_offset": 2048,
00:20:18.907        "data_size": 63488
00:20:18.907      },
00:20:18.907      {
00:20:18.907        "name": "BaseBdev2",
00:20:18.907        "uuid": "2296fba9-da6b-55b9-8701-40461e925553",
00:20:18.907        "is_configured": true,
00:20:18.907        "data_offset": 2048,
00:20:18.907        "data_size": 63488
00:20:18.907      }
00:20:18.907    ]
00:20:18.907  }'
00:20:18.907   23:53:49	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:18.907   23:53:49	-- common/autotest_common.sh@10 -- # set +x
00:20:19.474   23:53:50	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:19.474   23:53:50	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:19.474   23:53:50	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:20:19.474   23:53:50	-- bdev/bdev_raid.sh@185 -- # local target=none
00:20:19.474   23:53:50	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:19.474    23:53:50	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:19.474    23:53:50	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:19.735   23:53:50	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:19.735    "name": "raid_bdev1",
00:20:19.735    "uuid": "05bf41a8-3ee4-4e25-bbde-b7b3d3096659",
00:20:19.735    "strip_size_kb": 0,
00:20:19.735    "state": "online",
00:20:19.735    "raid_level": "raid1",
00:20:19.735    "superblock": true,
00:20:19.735    "num_base_bdevs": 2,
00:20:19.735    "num_base_bdevs_discovered": 1,
00:20:19.735    "num_base_bdevs_operational": 1,
00:20:19.735    "base_bdevs_list": [
00:20:19.735      {
00:20:19.735        "name": null,
00:20:19.735        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:19.735        "is_configured": false,
00:20:19.735        "data_offset": 2048,
00:20:19.735        "data_size": 63488
00:20:19.735      },
00:20:19.735      {
00:20:19.735        "name": "BaseBdev2",
00:20:19.735        "uuid": "2296fba9-da6b-55b9-8701-40461e925553",
00:20:19.735        "is_configured": true,
00:20:19.735        "data_offset": 2048,
00:20:19.735        "data_size": 63488
00:20:19.735      }
00:20:19.735    ]
00:20:19.735  }'
00:20:19.735    23:53:50	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:19.735   23:53:50	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:20:19.735    23:53:50	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:19.735   23:53:50	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:20:19.735   23:53:50	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:20:19.996  [2024-12-13 23:53:50.536139] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:20:19.996  [2024-12-13 23:53:50.536189] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:19.996   23:53:50	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:20:19.996  [2024-12-13 23:53:50.592587] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:20:19.996  [2024-12-13 23:53:50.594548] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
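
Re-adding the removed spare with bdev_raid_add_base_bdev is all it takes to start a fresh rebuild: the raid module claims the bdev and the process restarts from the first data block (the split offsets below begin again at 2048, matching this array's 2048-block data_offset). The degrade/re-attach pair, sketched with a convenience wrapper variable:

    rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"   # same socket as the trace
    $rpc bdev_raid_remove_base_bdev spare               # degrade the mirror
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare       # re-attach; rebuild restarts
    sleep 1                                             # let the rebuild thread spin up
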
00:20:19.996  [2024-12-13 23:53:50.702819] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:20:19.996  [2024-12-13 23:53:50.703232] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:20:20.255  [2024-12-13 23:53:50.917457] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:20:20.255  [2024-12-13 23:53:50.917615] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:20:21.191  [2024-12-13 23:53:51.579867] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:20:21.191  [2024-12-13 23:53:51.580164] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:21.191    23:53:51	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:21.191    23:53:51	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:21.191  [2024-12-13 23:53:51.812944] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:21.191    "name": "raid_bdev1",
00:20:21.191    "uuid": "05bf41a8-3ee4-4e25-bbde-b7b3d3096659",
00:20:21.191    "strip_size_kb": 0,
00:20:21.191    "state": "online",
00:20:21.191    "raid_level": "raid1",
00:20:21.191    "superblock": true,
00:20:21.191    "num_base_bdevs": 2,
00:20:21.191    "num_base_bdevs_discovered": 2,
00:20:21.191    "num_base_bdevs_operational": 2,
00:20:21.191    "process": {
00:20:21.191      "type": "rebuild",
00:20:21.191      "target": "spare",
00:20:21.191      "progress": {
00:20:21.191        "blocks": 14336,
00:20:21.191        "percent": 22
00:20:21.191      }
00:20:21.191    },
00:20:21.191    "base_bdevs_list": [
00:20:21.191      {
00:20:21.191        "name": "spare",
00:20:21.191        "uuid": "edf20eb6-d644-5d16-8400-e51ad0303aab",
00:20:21.191        "is_configured": true,
00:20:21.191        "data_offset": 2048,
00:20:21.191        "data_size": 63488
00:20:21.191      },
00:20:21.191      {
00:20:21.191        "name": "BaseBdev2",
00:20:21.191        "uuid": "2296fba9-da6b-55b9-8701-40461e925553",
00:20:21.191        "is_configured": true,
00:20:21.191        "data_offset": 2048,
00:20:21.191        "data_size": 63488
00:20:21.191      }
00:20:21.191    ]
00:20:21.191  }'
00:20:21.191    23:53:51	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:21.191    23:53:51	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@617 -- # '[' true = true ']'
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@617 -- # '[' = false ']'
00:20:21.191  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected
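
The "[: =: unary operator expected" message above is a genuine (if harmless here) script bug, and a classic single-bracket pitfall: at bdev_raid.sh line 617 an empty expansion leaves `[` with only "= false", the test fails, and execution simply falls through to the next branch. Assuming the variable is named local_bdev (a hypothetical name; the real one is not visible in the trace), the failing shape and two safe spellings are:

    [ $local_bdev = false ]          # breaks: becomes '[ = false ]' when empty
    [ "${local_bdev:-}" = false ]    # quoting keeps an (empty) left operand
    [[ $local_bdev == false ]]       # [[ ]] does not word-split, no quoting needed
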
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']'
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@657 -- # local timeout=442
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:21.191   23:53:51	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:21.191    23:53:51	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:21.191    23:53:51	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:21.450   23:53:52	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:21.450    "name": "raid_bdev1",
00:20:21.450    "uuid": "05bf41a8-3ee4-4e25-bbde-b7b3d3096659",
00:20:21.450    "strip_size_kb": 0,
00:20:21.450    "state": "online",
00:20:21.450    "raid_level": "raid1",
00:20:21.450    "superblock": true,
00:20:21.450    "num_base_bdevs": 2,
00:20:21.450    "num_base_bdevs_discovered": 2,
00:20:21.450    "num_base_bdevs_operational": 2,
00:20:21.450    "process": {
00:20:21.450      "type": "rebuild",
00:20:21.450      "target": "spare",
00:20:21.450      "progress": {
00:20:21.450        "blocks": 20480,
00:20:21.450        "percent": 32
00:20:21.450      }
00:20:21.450    },
00:20:21.450    "base_bdevs_list": [
00:20:21.450      {
00:20:21.450        "name": "spare",
00:20:21.450        "uuid": "edf20eb6-d644-5d16-8400-e51ad0303aab",
00:20:21.450        "is_configured": true,
00:20:21.450        "data_offset": 2048,
00:20:21.450        "data_size": 63488
00:20:21.450      },
00:20:21.450      {
00:20:21.450        "name": "BaseBdev2",
00:20:21.450        "uuid": "2296fba9-da6b-55b9-8701-40461e925553",
00:20:21.450        "is_configured": true,
00:20:21.450        "data_offset": 2048,
00:20:21.450        "data_size": 63488
00:20:21.450      }
00:20:21.450    ]
00:20:21.450  }'
00:20:21.450    23:53:52	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:21.708   23:53:52	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:21.708    23:53:52	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:21.708   23:53:52	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:21.708   23:53:52	-- bdev/bdev_raid.sh@662 -- # sleep 1
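
Steps @657-@662 above form a bounded polling loop: record a deadline (the traced timeout=442 is SECONDS plus roughly seven minutes), re-verify the rebuild once per second, and stop as soon as .process disappears from the JSON. Re-sketched from the xtrace (structure inferred, not verbatim):

    timeout=$((SECONDS + 420))                # the trace computed 442
    while (( SECONDS < timeout )); do
        type=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [[ $type == rebuild ]] || break       # .process vanishes once rebuild is done
        sleep 1
    done
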
00:20:21.969  [2024-12-13 23:53:52.458926] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:20:21.969  [2024-12-13 23:53:52.459182] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:20:22.564  [2024-12-13 23:53:53.110028] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008
00:20:22.564  [2024-12-13 23:53:53.212519] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:20:22.564  [2024-12-13 23:53:53.212697] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:20:22.564   23:53:53	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:20:22.564   23:53:53	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:22.564   23:53:53	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:22.564   23:53:53	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:22.564   23:53:53	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:22.564   23:53:53	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:22.564    23:53:53	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:22.564    23:53:53	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:22.822   23:53:53	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:22.822    "name": "raid_bdev1",
00:20:22.822    "uuid": "05bf41a8-3ee4-4e25-bbde-b7b3d3096659",
00:20:22.822    "strip_size_kb": 0,
00:20:22.822    "state": "online",
00:20:22.822    "raid_level": "raid1",
00:20:22.822    "superblock": true,
00:20:22.822    "num_base_bdevs": 2,
00:20:22.822    "num_base_bdevs_discovered": 2,
00:20:22.822    "num_base_bdevs_operational": 2,
00:20:22.822    "process": {
00:20:22.822      "type": "rebuild",
00:20:22.822      "target": "spare",
00:20:22.822      "progress": {
00:20:22.822        "blocks": 43008,
00:20:22.823        "percent": 67
00:20:22.823      }
00:20:22.823    },
00:20:22.823    "base_bdevs_list": [
00:20:22.823      {
00:20:22.823        "name": "spare",
00:20:22.823        "uuid": "edf20eb6-d644-5d16-8400-e51ad0303aab",
00:20:22.823        "is_configured": true,
00:20:22.823        "data_offset": 2048,
00:20:22.823        "data_size": 63488
00:20:22.823      },
00:20:22.823      {
00:20:22.823        "name": "BaseBdev2",
00:20:22.823        "uuid": "2296fba9-da6b-55b9-8701-40461e925553",
00:20:22.823        "is_configured": true,
00:20:22.823        "data_offset": 2048,
00:20:22.823        "data_size": 63488
00:20:22.823      }
00:20:22.823    ]
00:20:22.823  }'
00:20:22.823    23:53:53	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:22.823   23:53:53	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:22.823    23:53:53	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:23.081   23:53:53	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:23.081   23:53:53	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:20:23.081  [2024-12-13 23:53:53.644466] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152
00:20:23.339  [2024-12-13 23:53:53.884671] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296
00:20:23.598  [2024-12-13 23:53:54.294545] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440
00:20:23.857  [2024-12-13 23:53:54.396040] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440
00:20:24.116  [2024-12-13 23:53:54.599009] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:20:24.116   23:53:54	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:20:24.116   23:53:54	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:24.116   23:53:54	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:24.116   23:53:54	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:24.116   23:53:54	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:24.116   23:53:54	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:24.116    23:53:54	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:24.116    23:53:54	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:24.116  [2024-12-13 23:53:54.635485] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:20:24.116  [2024-12-13 23:53:54.637633] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:24.116   23:53:54	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:24.116    "name": "raid_bdev1",
00:20:24.116    "uuid": "05bf41a8-3ee4-4e25-bbde-b7b3d3096659",
00:20:24.116    "strip_size_kb": 0,
00:20:24.116    "state": "online",
00:20:24.116    "raid_level": "raid1",
00:20:24.116    "superblock": true,
00:20:24.116    "num_base_bdevs": 2,
00:20:24.116    "num_base_bdevs_discovered": 2,
00:20:24.116    "num_base_bdevs_operational": 2,
00:20:24.116    "base_bdevs_list": [
00:20:24.116      {
00:20:24.116        "name": "spare",
00:20:24.116        "uuid": "edf20eb6-d644-5d16-8400-e51ad0303aab",
00:20:24.116        "is_configured": true,
00:20:24.116        "data_offset": 2048,
00:20:24.116        "data_size": 63488
00:20:24.116      },
00:20:24.116      {
00:20:24.116        "name": "BaseBdev2",
00:20:24.116        "uuid": "2296fba9-da6b-55b9-8701-40461e925553",
00:20:24.116        "is_configured": true,
00:20:24.116        "data_offset": 2048,
00:20:24.116        "data_size": 63488
00:20:24.116      }
00:20:24.116    ]
00:20:24.116  }'
00:20:24.116    23:53:54	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:24.374   23:53:54	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:20:24.374    23:53:54	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:24.374   23:53:54	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:20:24.374   23:53:54	-- bdev/bdev_raid.sh@660 -- # break
00:20:24.374   23:53:54	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:24.374   23:53:54	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:24.374   23:53:54	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:20:24.374   23:53:54	-- bdev/bdev_raid.sh@185 -- # local target=none
00:20:24.374   23:53:54	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:24.374    23:53:54	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:24.374    23:53:54	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:24.633   23:53:55	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:24.633    "name": "raid_bdev1",
00:20:24.633    "uuid": "05bf41a8-3ee4-4e25-bbde-b7b3d3096659",
00:20:24.633    "strip_size_kb": 0,
00:20:24.633    "state": "online",
00:20:24.633    "raid_level": "raid1",
00:20:24.633    "superblock": true,
00:20:24.633    "num_base_bdevs": 2,
00:20:24.633    "num_base_bdevs_discovered": 2,
00:20:24.633    "num_base_bdevs_operational": 2,
00:20:24.633    "base_bdevs_list": [
00:20:24.633      {
00:20:24.633        "name": "spare",
00:20:24.633        "uuid": "edf20eb6-d644-5d16-8400-e51ad0303aab",
00:20:24.633        "is_configured": true,
00:20:24.633        "data_offset": 2048,
00:20:24.633        "data_size": 63488
00:20:24.633      },
00:20:24.633      {
00:20:24.633        "name": "BaseBdev2",
00:20:24.633        "uuid": "2296fba9-da6b-55b9-8701-40461e925553",
00:20:24.633        "is_configured": true,
00:20:24.633        "data_offset": 2048,
00:20:24.633        "data_size": 63488
00:20:24.633      }
00:20:24.633    ]
00:20:24.633  }'
00:20:24.633    23:53:55	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:24.633   23:53:55	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:20:24.633    23:53:55	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:24.633   23:53:55	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:20:24.633   23:53:55	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:24.633   23:53:55	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:24.633   23:53:55	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:24.633   23:53:55	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:24.633   23:53:55	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:24.633   23:53:55	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:20:24.633   23:53:55	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:24.633   23:53:55	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:24.633   23:53:55	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:24.633   23:53:55	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:24.633    23:53:55	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:24.633    23:53:55	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:24.893   23:53:55	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:24.893    "name": "raid_bdev1",
00:20:24.893    "uuid": "05bf41a8-3ee4-4e25-bbde-b7b3d3096659",
00:20:24.893    "strip_size_kb": 0,
00:20:24.893    "state": "online",
00:20:24.893    "raid_level": "raid1",
00:20:24.893    "superblock": true,
00:20:24.893    "num_base_bdevs": 2,
00:20:24.893    "num_base_bdevs_discovered": 2,
00:20:24.893    "num_base_bdevs_operational": 2,
00:20:24.893    "base_bdevs_list": [
00:20:24.893      {
00:20:24.893        "name": "spare",
00:20:24.893        "uuid": "edf20eb6-d644-5d16-8400-e51ad0303aab",
00:20:24.893        "is_configured": true,
00:20:24.893        "data_offset": 2048,
00:20:24.893        "data_size": 63488
00:20:24.893      },
00:20:24.893      {
00:20:24.893        "name": "BaseBdev2",
00:20:24.893        "uuid": "2296fba9-da6b-55b9-8701-40461e925553",
00:20:24.893        "is_configured": true,
00:20:24.893        "data_offset": 2048,
00:20:24.893        "data_size": 63488
00:20:24.893      }
00:20:24.893    ]
00:20:24.893  }'
00:20:24.893   23:53:55	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:24.893   23:53:55	-- common/autotest_common.sh@10 -- # set +x
00:20:25.462   23:53:56	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:20:25.720  [2024-12-13 23:53:56.345015] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:25.720  [2024-12-13 23:53:56.345049] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:25.720  
00:20:25.720                                                                                                  Latency(us)
[2024-12-13T23:53:56.452Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-13T23:53:56.452Z]  Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:20:25.720  	 raid_bdev1          :      10.12     126.18     378.53       0.00     0.00   10568.74     279.27  108193.98
[2024-12-13T23:53:56.452Z]  ===================================================================================================================
[2024-12-13T23:53:56.452Z]  Total                       :                126.18     378.53       0.00     0.00   10568.74     279.27  108193.98
00:20:25.720  [2024-12-13 23:53:56.371765] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:25.720  [2024-12-13 23:53:56.371805] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:25.720  [2024-12-13 23:53:56.371885] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:25.720  [2024-12-13 23:53:56.371896] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline
00:20:25.720  0
00:20:25.720    23:53:56	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:25.720    23:53:56	-- bdev/bdev_raid.sh@671 -- # jq length
00:20:25.979   23:53:56	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:20:25.979   23:53:56	-- bdev/bdev_raid.sh@673 -- # '[' true = true ']'
00:20:25.979   23:53:56	-- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:20:25.979   23:53:56	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:25.979   23:53:56	-- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:20:25.979   23:53:56	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:25.979   23:53:56	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:20:25.979   23:53:56	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:25.979   23:53:56	-- bdev/nbd_common.sh@12 -- # local i
00:20:25.979   23:53:56	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:25.979   23:53:56	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:25.979   23:53:56	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
00:20:26.238  /dev/nbd0
00:20:26.238    23:53:56	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:20:26.238   23:53:56	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:20:26.238   23:53:56	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:20:26.238   23:53:56	-- common/autotest_common.sh@867 -- # local i
00:20:26.238   23:53:56	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:20:26.238   23:53:56	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:20:26.238   23:53:56	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:20:26.238   23:53:56	-- common/autotest_common.sh@871 -- # break
00:20:26.238   23:53:56	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:20:26.238   23:53:56	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:20:26.238   23:53:56	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:26.238  1+0 records in
00:20:26.238  1+0 records out
00:20:26.238  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004115 s, 10.0 MB/s
00:20:26.238    23:53:56	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:26.238   23:53:56	-- common/autotest_common.sh@884 -- # size=4096
00:20:26.238   23:53:56	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:26.238   23:53:56	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:20:26.238   23:53:56	-- common/autotest_common.sh@887 -- # return 0
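
The waitfornbd helper above gates on two things: the device name showing up in /proc/partitions, and a single O_DIRECT read producing a non-empty file (the "1+0 records" dd output). An illustrative reconstruction, with the retry pacing and scratch path as assumptions:

    waitfornbd_sketch() {
        local nbd=$1 i scratch=/tmp/nbdtest
        for ((i = 1; i <= 20; i++)); do               # wait for the kernel node
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1
        done
        dd if=/dev/$nbd of=$scratch bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s $scratch)" != 0 ]             # prove the read returned data
        local rc=$?
        rm -f $scratch
        return $rc
    }
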
00:20:26.238   23:53:56	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:26.238   23:53:56	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:26.238   23:53:56	-- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}"
00:20:26.238   23:53:56	-- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']'
00:20:26.238   23:53:56	-- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1
00:20:26.238   23:53:56	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:26.238   23:53:56	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2')
00:20:26.238   23:53:56	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:26.238   23:53:56	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:20:26.238   23:53:56	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:26.238   23:53:56	-- bdev/nbd_common.sh@12 -- # local i
00:20:26.238   23:53:56	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:26.238   23:53:56	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:26.238   23:53:56	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1
00:20:26.497  /dev/nbd1
00:20:26.497    23:53:57	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:20:26.497   23:53:57	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:20:26.497   23:53:57	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:20:26.497   23:53:57	-- common/autotest_common.sh@867 -- # local i
00:20:26.497   23:53:57	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:20:26.497   23:53:57	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:20:26.497   23:53:57	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:20:26.497   23:53:57	-- common/autotest_common.sh@871 -- # break
00:20:26.497   23:53:57	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:20:26.497   23:53:57	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:20:26.497   23:53:57	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:26.497  1+0 records in
00:20:26.497  1+0 records out
00:20:26.497  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557938 s, 7.3 MB/s
00:20:26.497    23:53:57	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:26.497   23:53:57	-- common/autotest_common.sh@884 -- # size=4096
00:20:26.497   23:53:57	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:26.497   23:53:57	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:20:26.497   23:53:57	-- common/autotest_common.sh@887 -- # return 0
00:20:26.497   23:53:57	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:26.497   23:53:57	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:26.497   23:53:57	-- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
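
The data integrity check itself is one cmp call: -i 1048576 skips the first MiB on both exported devices, which is exactly the superblock/metadata region here (data_offset of 2048 blocks x 512-byte blocklen = 1048576 bytes), so only the mirrored data region is compared. Equivalently:

    cmp -i $((2048 * 512)) /dev/nbd0 /dev/nbd1 && echo "spare matches BaseBdev2"
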
00:20:26.756   23:53:57	-- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1
00:20:26.756   23:53:57	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:26.756   23:53:57	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:20:26.756   23:53:57	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:26.756   23:53:57	-- bdev/nbd_common.sh@51 -- # local i
00:20:26.756   23:53:57	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:26.756   23:53:57	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:20:27.014    23:53:57	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:20:27.014   23:53:57	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:20:27.014   23:53:57	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:20:27.014   23:53:57	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:27.014   23:53:57	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:27.014   23:53:57	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:20:27.014   23:53:57	-- bdev/nbd_common.sh@41 -- # break
00:20:27.014   23:53:57	-- bdev/nbd_common.sh@45 -- # return 0
00:20:27.014   23:53:57	-- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:20:27.014   23:53:57	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:27.014   23:53:57	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:20:27.014   23:53:57	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:27.014   23:53:57	-- bdev/nbd_common.sh@51 -- # local i
00:20:27.014   23:53:57	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:27.014   23:53:57	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:20:27.273    23:53:57	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:27.273   23:53:57	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:27.273   23:53:57	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:27.273   23:53:57	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:27.273   23:53:57	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:27.273   23:53:57	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:27.273   23:53:57	-- bdev/nbd_common.sh@41 -- # break
00:20:27.273   23:53:57	-- bdev/nbd_common.sh@45 -- # return 0
00:20:27.273   23:53:57	-- bdev/bdev_raid.sh@692 -- # '[' true = true ']'
00:20:27.273   23:53:57	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:20:27.273   23:53:57	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']'
00:20:27.273   23:53:57	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:20:27.273   23:53:57	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:20:27.531  [2024-12-13 23:53:58.202465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:20:27.531  [2024-12-13 23:53:58.202553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:27.531  [2024-12-13 23:53:58.202589] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:20:27.531  [2024-12-13 23:53:58.202619] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:27.531  [2024-12-13 23:53:58.204927] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:27.531  [2024-12-13 23:53:58.204996] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:20:27.531  [2024-12-13 23:53:58.205095] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:20:27.531  [2024-12-13 23:53:58.205155] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:27.531  BaseBdev1
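
Deleting and re-creating the passthru is how the test forces re-examination: registering BaseBdev1 anew makes the raid module open it, find the on-disk raid superblock, and claim it straight back into raid_bdev1 with no explicit bdev_raid RPC, as the DEBUG lines above show. The step, restated:

    rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_passthru_delete BaseBdev1
    $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    # -> "raid superblock found on bdev BaseBdev1" / "bdev BaseBdev1 is claimed"
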
00:20:27.531   23:53:58	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:20:27.531   23:53:58	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']'
00:20:27.531   23:53:58	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2
00:20:27.790   23:53:58	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:20:28.049  [2024-12-13 23:53:58.630560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:20:28.049  [2024-12-13 23:53:58.630620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:28.049  [2024-12-13 23:53:58.630653] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:20:28.049  [2024-12-13 23:53:58.630679] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:28.049  [2024-12-13 23:53:58.631071] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:28.049  [2024-12-13 23:53:58.631122] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:20:28.049  [2024-12-13 23:53:58.631212] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2
00:20:28.049  [2024-12-13 23:53:58.631225] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1)
00:20:28.049  [2024-12-13 23:53:58.631232] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:28.049  [2024-12-13 23:53:58.631248] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring
00:20:28.049  [2024-12-13 23:53:58.631307] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:28.049  BaseBdev2
00:20:28.049   23:53:58	-- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:20:28.308   23:53:58	-- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:20:28.567  [2024-12-13 23:53:59.050669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:28.567  [2024-12-13 23:53:59.050723] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:28.567  [2024-12-13 23:53:59.050758] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:20:28.567  [2024-12-13 23:53:59.050778] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:28.567  [2024-12-13 23:53:59.051181] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:28.567  [2024-12-13 23:53:59.051230] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:28.567  [2024-12-13 23:53:59.051327] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare
00:20:28.567  [2024-12-13 23:53:59.051348] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:28.567  spare
00:20:28.567   23:53:59	-- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:28.567   23:53:59	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:28.567   23:53:59	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:28.567   23:53:59	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:28.567   23:53:59	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:28.567   23:53:59	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:20:28.567   23:53:59	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:28.567   23:53:59	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:28.567   23:53:59	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:28.567   23:53:59	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:28.567    23:53:59	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:28.567    23:53:59	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:28.567  [2024-12-13 23:53:59.151480] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80
00:20:28.567  [2024-12-13 23:53:59.151500] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:20:28.567  [2024-12-13 23:53:59.151596] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30
00:20:28.567  [2024-12-13 23:53:59.151940] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80
00:20:28.567  [2024-12-13 23:53:59.151953] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80
00:20:28.567  [2024-12-13 23:53:59.152064] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:28.567   23:53:59	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:28.567    "name": "raid_bdev1",
00:20:28.567    "uuid": "05bf41a8-3ee4-4e25-bbde-b7b3d3096659",
00:20:28.567    "strip_size_kb": 0,
00:20:28.567    "state": "online",
00:20:28.567    "raid_level": "raid1",
00:20:28.567    "superblock": true,
00:20:28.567    "num_base_bdevs": 2,
00:20:28.567    "num_base_bdevs_discovered": 2,
00:20:28.567    "num_base_bdevs_operational": 2,
00:20:28.567    "base_bdevs_list": [
00:20:28.567      {
00:20:28.567        "name": "spare",
00:20:28.567        "uuid": "edf20eb6-d644-5d16-8400-e51ad0303aab",
00:20:28.567        "is_configured": true,
00:20:28.567        "data_offset": 2048,
00:20:28.567        "data_size": 63488
00:20:28.567      },
00:20:28.567      {
00:20:28.567        "name": "BaseBdev2",
00:20:28.567        "uuid": "2296fba9-da6b-55b9-8701-40461e925553",
00:20:28.567        "is_configured": true,
00:20:28.567        "data_offset": 2048,
00:20:28.567        "data_size": 63488
00:20:28.567      }
00:20:28.567    ]
00:20:28.567  }'
00:20:28.567   23:53:59	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:28.567   23:53:59	-- common/autotest_common.sh@10 -- # set +x
00:20:29.134   23:53:59	-- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:29.134   23:53:59	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:29.134   23:53:59	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:20:29.134   23:53:59	-- bdev/bdev_raid.sh@185 -- # local target=none
00:20:29.134   23:53:59	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:29.134    23:53:59	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:29.134    23:53:59	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:29.392   23:54:00	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:29.392    "name": "raid_bdev1",
00:20:29.392    "uuid": "05bf41a8-3ee4-4e25-bbde-b7b3d3096659",
00:20:29.392    "strip_size_kb": 0,
00:20:29.392    "state": "online",
00:20:29.392    "raid_level": "raid1",
00:20:29.392    "superblock": true,
00:20:29.392    "num_base_bdevs": 2,
00:20:29.392    "num_base_bdevs_discovered": 2,
00:20:29.392    "num_base_bdevs_operational": 2,
00:20:29.392    "base_bdevs_list": [
00:20:29.392      {
00:20:29.392        "name": "spare",
00:20:29.392        "uuid": "edf20eb6-d644-5d16-8400-e51ad0303aab",
00:20:29.392        "is_configured": true,
00:20:29.392        "data_offset": 2048,
00:20:29.392        "data_size": 63488
00:20:29.392      },
00:20:29.392      {
00:20:29.392        "name": "BaseBdev2",
00:20:29.392        "uuid": "2296fba9-da6b-55b9-8701-40461e925553",
00:20:29.392        "is_configured": true,
00:20:29.392        "data_offset": 2048,
00:20:29.392        "data_size": 63488
00:20:29.392      }
00:20:29.392    ]
00:20:29.392  }'
00:20:29.392    23:54:00	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:29.392   23:54:00	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:20:29.392    23:54:00	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:29.650   23:54:00	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:20:29.650    23:54:00	-- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:29.650    23:54:00	-- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name'
00:20:29.650   23:54:00	-- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]]
00:20:29.650   23:54:00	-- bdev/bdev_raid.sh@709 -- # killprocess 123916
00:20:29.650   23:54:00	-- common/autotest_common.sh@936 -- # '[' -z 123916 ']'
00:20:29.650   23:54:00	-- common/autotest_common.sh@940 -- # kill -0 123916
00:20:29.650    23:54:00	-- common/autotest_common.sh@941 -- # uname
00:20:29.910   23:54:00	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:29.910    23:54:00	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123916
00:20:29.910  killing process with pid 123916
00:20:29.910  Received shutdown signal, test time was about 14.168789 seconds
00:20:29.910  
00:20:29.910                                                                                                  Latency(us)
[2024-12-13T23:54:00.642Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-13T23:54:00.642Z]  ===================================================================================================================
[2024-12-13T23:54:00.642Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:20:29.910   23:54:00	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:29.910   23:54:00	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:29.910   23:54:00	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 123916'
00:20:29.910   23:54:00	-- common/autotest_common.sh@955 -- # kill 123916
00:20:29.910   23:54:00	-- common/autotest_common.sh@960 -- # wait 123916
00:20:29.910  [2024-12-13 23:54:00.404856] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:20:29.910  [2024-12-13 23:54:00.404912] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:29.910  [2024-12-13 23:54:00.404972] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:29.910  [2024-12-13 23:54:00.404985] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline
00:20:29.910  [2024-12-13 23:54:00.562452] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:20:31.286  ************************************
00:20:31.286  END TEST raid_rebuild_test_sb_io
00:20:31.286  ************************************
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@711 -- # return 0
00:20:31.286  
00:20:31.286  real	0m19.404s
00:20:31.286  user	0m31.063s
00:20:31.286  sys	0m2.205s
00:20:31.286   23:54:01	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:31.286   23:54:01	-- common/autotest_common.sh@10 -- # set +x
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@734 -- # for n in 2 4
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false
00:20:31.286   23:54:01	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:20:31.286   23:54:01	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:31.286   23:54:01	-- common/autotest_common.sh@10 -- # set +x
00:20:31.286  ************************************
00:20:31.286  START TEST raid_rebuild_test
00:20:31.286  ************************************
00:20:31.286   23:54:01	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false false
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@519 -- # local superblock=false
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@520 -- # local background_io=false
00:20:31.286    23:54:01	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:20:31.286    23:54:01	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:31.286    23:54:01	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:20:31.286    23:54:01	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:31.286    23:54:01	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:31.286    23:54:01	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:20:31.286    23:54:01	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:31.286    23:54:01	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:31.286    23:54:01	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:20:31.286    23:54:01	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:31.286    23:54:01	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:31.286    23:54:01	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev4
00:20:31.286    23:54:01	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:31.286    23:54:01	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@536 -- # strip_size=0
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@539 -- # '[' false = true ']'
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@544 -- # raid_pid=124451
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@545 -- # waitforlisten 124451 /var/tmp/spdk-raid.sock
00:20:31.286   23:54:01	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:20:31.286   23:54:01	-- common/autotest_common.sh@829 -- # '[' -z 124451 ']'
00:20:31.286   23:54:01	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:20:31.286   23:54:01	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:31.286  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:20:31.286   23:54:01	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:20:31.286   23:54:01	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:31.287   23:54:01	-- common/autotest_common.sh@10 -- # set +x
00:20:31.287  I/O size of 3145728 is greater than zero copy threshold (65536).
00:20:31.287  Zero copy mechanism will not be used.
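
The bdevperf invocation at @543 explains both notices above: a 3 MiB I/O size (-o 3M, i.e. 3145728 bytes) exceeds the 65536-byte zero-copy threshold, so zero copy is disabled. An annotated reading of the flags, hedged from the job line bdevperf itself prints ("workload: randrw, percentage: 50, depth: 2, IO size: 3145728"); the -U flag is left out rather than guessed at:

    args=(
        -r /var/tmp/spdk-raid.sock   # private RPC socket, shared with rpc.py
        -T raid_bdev1                # run the job against this bdev only
        -t 60                        # 60-second time limit
        -w randrw -M 50              # random I/O, 50% reads / 50% writes
        -o 3M -q 2                   # 3 MiB I/Os at queue depth 2
        -z                           # start idle; wait for an RPC to begin
        -L bdev_raid                 # enable the *DEBUG* bdev_raid log flag
    )
    ./build/examples/bdevperf "${args[@]}"
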
00:20:31.287  [2024-12-13 23:54:01.743780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:31.287  [2024-12-13 23:54:01.743923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124451 ]
00:20:31.287  [2024-12-13 23:54:01.899776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:31.545  [2024-12-13 23:54:02.086836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:31.545  [2024-12-13 23:54:02.273431] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:32.112   23:54:02	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:32.112   23:54:02	-- common/autotest_common.sh@862 -- # return 0
00:20:32.112   23:54:02	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:32.112   23:54:02	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:20:32.112   23:54:02	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:20:32.371  BaseBdev1
00:20:32.371   23:54:02	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:32.371   23:54:02	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:20:32.371   23:54:02	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:20:32.629  BaseBdev2
00:20:32.629   23:54:03	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:32.629   23:54:03	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:20:32.629   23:54:03	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:20:32.629  BaseBdev3
00:20:32.629   23:54:03	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:32.629   23:54:03	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:20:32.629   23:54:03	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:20:32.887  BaseBdev4
00:20:32.887   23:54:03	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:20:33.145  spare_malloc
00:20:33.145   23:54:03	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:20:33.403  spare_delay
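
The spare used for rebuild testing is a three-layer stack, malloc -> delay -> passthru, built by the three RPCs traced above. The delay bdev is what keeps the rebuild observable: going by the argument values, reads pass through undelayed (-r 0 -t 0) while writes take on the order of 100 ms (-w 100000 -n 100000, in microseconds), and rebuild traffic to the spare is all writes. The stack, restated:

    rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b spare_malloc      # 32 MB backing, 512 B blocks
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare   # final bdev name: spare
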
00:20:33.403   23:54:04	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:20:33.661  [2024-12-13 23:54:04.185113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:33.661  [2024-12-13 23:54:04.185191] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:33.661  [2024-12-13 23:54:04.185225] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:20:33.661  [2024-12-13 23:54:04.185274] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:33.661  [2024-12-13 23:54:04.187541] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:33.661  [2024-12-13 23:54:04.187586] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:33.661  spare
00:20:33.661   23:54:04	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:20:33.661  [2024-12-13 23:54:04.369163] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:33.661  [2024-12-13 23:54:04.371204] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:33.661  [2024-12-13 23:54:04.371379] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:33.661  [2024-12-13 23:54:04.371471] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:20:33.661  [2024-12-13 23:54:04.371660] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80
00:20:33.661  [2024-12-13 23:54:04.371703] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:20:33.661  [2024-12-13 23:54:04.371967] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930
00:20:33.661  [2024-12-13 23:54:04.372473] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80
00:20:33.661  [2024-12-13 23:54:04.372603] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80
00:20:33.661  [2024-12-13 23:54:04.372852] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
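
Assembling the 4-way mirror is a single RPC (traced at @563 above); no superblock option is passed, which lines up with "superblock": false in the dump that follows:

    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -r raid1 -n raid_bdev1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'
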
00:20:33.661   23:54:04	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:20:33.661   23:54:04	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:33.661   23:54:04	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:33.661   23:54:04	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:33.661   23:54:04	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:33.661   23:54:04	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:20:33.662   23:54:04	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:33.662   23:54:04	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:33.662   23:54:04	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:33.662   23:54:04	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:33.662    23:54:04	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:33.662    23:54:04	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:33.920   23:54:04	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:33.920    "name": "raid_bdev1",
00:20:33.920    "uuid": "9ca1c9f2-492b-4cf7-85af-c0980ac431a3",
00:20:33.920    "strip_size_kb": 0,
00:20:33.920    "state": "online",
00:20:33.920    "raid_level": "raid1",
00:20:33.920    "superblock": false,
00:20:33.920    "num_base_bdevs": 4,
00:20:33.920    "num_base_bdevs_discovered": 4,
00:20:33.920    "num_base_bdevs_operational": 4,
00:20:33.920    "base_bdevs_list": [
00:20:33.920      {
00:20:33.920        "name": "BaseBdev1",
00:20:33.920        "uuid": "af184535-90b3-47ab-89bc-424fa3554aed",
00:20:33.920        "is_configured": true,
00:20:33.920        "data_offset": 0,
00:20:33.920        "data_size": 65536
00:20:33.920      },
00:20:33.920      {
00:20:33.920        "name": "BaseBdev2",
00:20:33.920        "uuid": "7e7ba814-5095-4f1f-a363-a80ecedfb701",
00:20:33.920        "is_configured": true,
00:20:33.920        "data_offset": 0,
00:20:33.920        "data_size": 65536
00:20:33.920      },
00:20:33.920      {
00:20:33.920        "name": "BaseBdev3",
00:20:33.920        "uuid": "54f2e0a9-53b2-4f0d-aed7-e0c0e063e5a9",
00:20:33.920        "is_configured": true,
00:20:33.920        "data_offset": 0,
00:20:33.920        "data_size": 65536
00:20:33.920      },
00:20:33.920      {
00:20:33.920        "name": "BaseBdev4",
00:20:33.920        "uuid": "ddf74652-2c0e-4626-a439-7375df17a3b7",
00:20:33.920        "is_configured": true,
00:20:33.920        "data_offset": 0,
00:20:33.920        "data_size": 65536
00:20:33.920      }
00:20:33.921    ]
00:20:33.921  }'
00:20:33.921   23:54:04	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:33.921   23:54:04	-- common/autotest_common.sh@10 -- # set +x
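
The verify_raid_bdev_state helper traced above (bdev_raid.sh@117-129) boils down to one RPC call plus a handful of jq field comparisons against the expected state. A minimal standalone sketch of the same check, assuming the rpc.py script and /var/tmp/spdk-raid.sock socket used throughout this run:

  # Sketch of the state check above; succeeds only if all fields match.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($rpc bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')
  [ "$(jq -r .state <<< "$info")" = online ] &&
    [ "$(jq -r .raid_level <<< "$info")" = raid1 ] &&
    [ "$(jq -r .num_base_bdevs_discovered <<< "$info")" -eq 4 ]
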
00:20:34.856    23:54:05	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:20:34.856    23:54:05	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:20:34.856  [2024-12-13 23:54:05.465517] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:34.856   23:54:05	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536
00:20:34.856    23:54:05	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:34.856    23:54:05	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:20:35.115   23:54:05	-- bdev/bdev_raid.sh@570 -- # data_offset=0
00:20:35.115   23:54:05	-- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:20:35.115   23:54:05	-- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:20:35.115   23:54:05	-- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:20:35.115   23:54:05	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:35.115   23:54:05	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:20:35.115   23:54:05	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:35.115   23:54:05	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:20:35.115   23:54:05	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:35.115   23:54:05	-- bdev/nbd_common.sh@12 -- # local i
00:20:35.115   23:54:05	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:35.115   23:54:05	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:35.115   23:54:05	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:20:35.373  [2024-12-13 23:54:05.969453] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0
00:20:35.373  /dev/nbd0
00:20:35.373    23:54:06	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:20:35.373   23:54:06	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:20:35.373   23:54:06	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:20:35.373   23:54:06	-- common/autotest_common.sh@867 -- # local i
00:20:35.373   23:54:06	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:20:35.373   23:54:06	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:20:35.373   23:54:06	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:20:35.373   23:54:06	-- common/autotest_common.sh@871 -- # break
00:20:35.373   23:54:06	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:20:35.373   23:54:06	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:20:35.373   23:54:06	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:35.373  1+0 records in
00:20:35.373  1+0 records out
00:20:35.373  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632072 s, 6.5 MB/s
00:20:35.373    23:54:06	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:35.373   23:54:06	-- common/autotest_common.sh@884 -- # size=4096
00:20:35.373   23:54:06	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:35.373   23:54:06	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:20:35.373   23:54:06	-- common/autotest_common.sh@887 -- # return 0
00:20:35.373   23:54:06	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:35.373   23:54:06	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:35.373   23:54:06	-- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']'
00:20:35.373   23:54:06	-- bdev/bdev_raid.sh@584 -- # write_unit_size=1
00:20:35.373   23:54:06	-- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
00:20:41.933  65536+0 records in
00:20:41.933  65536+0 records out
00:20:41.933  33554432 bytes (34 MB, 32 MiB) copied, 5.46785 s, 6.1 MB/s
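
The dd above seeds the entire array with random data before any member is pulled: 65536 blocks x 512 bytes = 33554432 bytes (32 MiB), matching both the byte count dd reports and the blockcnt 65536 / blocklen 512 the raid bdev was configured with. write_unit_size is 1 block here because raid1 has no stripe to keep writes aligned to.
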
00:20:41.933   23:54:11	-- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:20:41.933   23:54:11	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:41.933   23:54:11	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:20:41.933   23:54:11	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:41.933   23:54:11	-- bdev/nbd_common.sh@51 -- # local i
00:20:41.933   23:54:11	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:41.933   23:54:11	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:20:41.933    23:54:11	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:41.933  [2024-12-13 23:54:11.756588] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:41.933   23:54:11	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:41.933   23:54:11	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:41.933   23:54:11	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:41.933   23:54:11	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:41.933   23:54:11	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:41.933   23:54:11	-- bdev/nbd_common.sh@41 -- # break
00:20:41.933   23:54:11	-- bdev/nbd_common.sh@45 -- # return 0
00:20:41.933   23:54:11	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:20:41.933  [2024-12-13 23:54:11.924341] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:20:41.933   23:54:11	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:20:41.933   23:54:11	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:41.933   23:54:11	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:41.933   23:54:11	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:41.933   23:54:11	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:41.933   23:54:11	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:20:41.933   23:54:11	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:41.933   23:54:11	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:41.933   23:54:11	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:41.933   23:54:11	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:41.933    23:54:11	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:41.933    23:54:11	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:41.933   23:54:12	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:41.933    "name": "raid_bdev1",
00:20:41.933    "uuid": "9ca1c9f2-492b-4cf7-85af-c0980ac431a3",
00:20:41.933    "strip_size_kb": 0,
00:20:41.933    "state": "online",
00:20:41.933    "raid_level": "raid1",
00:20:41.933    "superblock": false,
00:20:41.933    "num_base_bdevs": 4,
00:20:41.933    "num_base_bdevs_discovered": 3,
00:20:41.933    "num_base_bdevs_operational": 3,
00:20:41.933    "base_bdevs_list": [
00:20:41.933      {
00:20:41.933        "name": null,
00:20:41.933        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:41.933        "is_configured": false,
00:20:41.933        "data_offset": 0,
00:20:41.933        "data_size": 65536
00:20:41.933      },
00:20:41.933      {
00:20:41.933        "name": "BaseBdev2",
00:20:41.933        "uuid": "7e7ba814-5095-4f1f-a363-a80ecedfb701",
00:20:41.933        "is_configured": true,
00:20:41.933        "data_offset": 0,
00:20:41.933        "data_size": 65536
00:20:41.933      },
00:20:41.933      {
00:20:41.933        "name": "BaseBdev3",
00:20:41.933        "uuid": "54f2e0a9-53b2-4f0d-aed7-e0c0e063e5a9",
00:20:41.933        "is_configured": true,
00:20:41.933        "data_offset": 0,
00:20:41.933        "data_size": 65536
00:20:41.933      },
00:20:41.933      {
00:20:41.933        "name": "BaseBdev4",
00:20:41.933        "uuid": "ddf74652-2c0e-4626-a439-7375df17a3b7",
00:20:41.933        "is_configured": true,
00:20:41.933        "data_offset": 0,
00:20:41.933        "data_size": 65536
00:20:41.933      }
00:20:41.933    ]
00:20:41.933  }'
00:20:41.933   23:54:12	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:41.933   23:54:12	-- common/autotest_common.sh@10 -- # set +x
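
After the hot-remove the array stays online but degraded: num_base_bdevs is still 4 while discovered and operational drop to 3, and the vacated slot keeps its position in base_bdevs_list with a null name and the all-zero UUID. One way to count the still-configured slots, with rpc defined as in the earlier sketch:

  # Expect 3 of 4 slots configured once BaseBdev1 is gone.
  $rpc bdev_raid_get_bdevs all |
    jq '[.[] | select(.name == "raid_bdev1") | .base_bdevs_list[] | select(.is_configured)] | length'
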
00:20:42.192   23:54:12	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:20:42.451  [2024-12-13 23:54:13.052517] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:20:42.451  [2024-12-13 23:54:13.052676] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:42.451  [2024-12-13 23:54:13.063203] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0
00:20:42.451  [2024-12-13 23:54:13.065234] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:42.451   23:54:13	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:20:43.386   23:54:14	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:43.386   23:54:14	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:43.386   23:54:14	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:43.386   23:54:14	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:43.386   23:54:14	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:43.386    23:54:14	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:43.386    23:54:14	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:43.645   23:54:14	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:43.645    "name": "raid_bdev1",
00:20:43.645    "uuid": "9ca1c9f2-492b-4cf7-85af-c0980ac431a3",
00:20:43.645    "strip_size_kb": 0,
00:20:43.645    "state": "online",
00:20:43.645    "raid_level": "raid1",
00:20:43.645    "superblock": false,
00:20:43.645    "num_base_bdevs": 4,
00:20:43.645    "num_base_bdevs_discovered": 4,
00:20:43.645    "num_base_bdevs_operational": 4,
00:20:43.645    "process": {
00:20:43.645      "type": "rebuild",
00:20:43.645      "target": "spare",
00:20:43.645      "progress": {
00:20:43.645        "blocks": 24576,
00:20:43.645        "percent": 37
00:20:43.645      }
00:20:43.645    },
00:20:43.645    "base_bdevs_list": [
00:20:43.645      {
00:20:43.645        "name": "spare",
00:20:43.645        "uuid": "d8729656-f43d-522a-afae-223f10a2de71",
00:20:43.645        "is_configured": true,
00:20:43.645        "data_offset": 0,
00:20:43.645        "data_size": 65536
00:20:43.645      },
00:20:43.645      {
00:20:43.645        "name": "BaseBdev2",
00:20:43.645        "uuid": "7e7ba814-5095-4f1f-a363-a80ecedfb701",
00:20:43.645        "is_configured": true,
00:20:43.645        "data_offset": 0,
00:20:43.645        "data_size": 65536
00:20:43.645      },
00:20:43.645      {
00:20:43.645        "name": "BaseBdev3",
00:20:43.645        "uuid": "54f2e0a9-53b2-4f0d-aed7-e0c0e063e5a9",
00:20:43.645        "is_configured": true,
00:20:43.645        "data_offset": 0,
00:20:43.645        "data_size": 65536
00:20:43.645      },
00:20:43.645      {
00:20:43.645        "name": "BaseBdev4",
00:20:43.645        "uuid": "ddf74652-2c0e-4626-a439-7375df17a3b7",
00:20:43.645        "is_configured": true,
00:20:43.645        "data_offset": 0,
00:20:43.645        "data_size": 65536
00:20:43.645      }
00:20:43.645    ]
00:20:43.645  }'
00:20:43.645    23:54:14	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:43.645   23:54:14	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:43.904    23:54:14	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:43.904   23:54:14	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:43.904   23:54:14	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:20:43.904  [2024-12-13 23:54:14.597708] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:44.163  [2024-12-13 23:54:14.673727] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:20:44.163  [2024-12-13 23:54:14.673978] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
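
Removing "spare" while it is still the rebuild target aborts the process instead of completing it; the WARNING above ("Finished rebuild ... No such device") is evidently the ENODEV teardown path. Afterwards the bdev should report no background process at all, which verify_raid_bdev_process none none confirms a few lines below via:

  # The process object vanishes once the rebuild is torn down.
  $rpc bdev_raid_get_bdevs all |
    jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"'   # -> none
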
00:20:44.163   23:54:14	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:20:44.163   23:54:14	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:44.163   23:54:14	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:44.163   23:54:14	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:44.163   23:54:14	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:44.163   23:54:14	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:20:44.163   23:54:14	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:44.163   23:54:14	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:44.163   23:54:14	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:44.163   23:54:14	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:44.163    23:54:14	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:44.163    23:54:14	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:44.422   23:54:14	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:44.422    "name": "raid_bdev1",
00:20:44.422    "uuid": "9ca1c9f2-492b-4cf7-85af-c0980ac431a3",
00:20:44.422    "strip_size_kb": 0,
00:20:44.422    "state": "online",
00:20:44.422    "raid_level": "raid1",
00:20:44.422    "superblock": false,
00:20:44.422    "num_base_bdevs": 4,
00:20:44.422    "num_base_bdevs_discovered": 3,
00:20:44.422    "num_base_bdevs_operational": 3,
00:20:44.422    "base_bdevs_list": [
00:20:44.422      {
00:20:44.422        "name": null,
00:20:44.422        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:44.422        "is_configured": false,
00:20:44.422        "data_offset": 0,
00:20:44.422        "data_size": 65536
00:20:44.422      },
00:20:44.422      {
00:20:44.422        "name": "BaseBdev2",
00:20:44.422        "uuid": "7e7ba814-5095-4f1f-a363-a80ecedfb701",
00:20:44.422        "is_configured": true,
00:20:44.422        "data_offset": 0,
00:20:44.422        "data_size": 65536
00:20:44.422      },
00:20:44.422      {
00:20:44.422        "name": "BaseBdev3",
00:20:44.422        "uuid": "54f2e0a9-53b2-4f0d-aed7-e0c0e063e5a9",
00:20:44.422        "is_configured": true,
00:20:44.422        "data_offset": 0,
00:20:44.422        "data_size": 65536
00:20:44.422      },
00:20:44.422      {
00:20:44.422        "name": "BaseBdev4",
00:20:44.422        "uuid": "ddf74652-2c0e-4626-a439-7375df17a3b7",
00:20:44.422        "is_configured": true,
00:20:44.422        "data_offset": 0,
00:20:44.422        "data_size": 65536
00:20:44.422      }
00:20:44.422    ]
00:20:44.422  }'
00:20:44.422   23:54:14	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:44.422   23:54:14	-- common/autotest_common.sh@10 -- # set +x
00:20:44.988   23:54:15	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:44.988   23:54:15	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:44.988   23:54:15	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:20:44.988   23:54:15	-- bdev/bdev_raid.sh@185 -- # local target=none
00:20:44.988   23:54:15	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:44.988    23:54:15	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:44.988    23:54:15	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:45.246   23:54:15	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:45.247    "name": "raid_bdev1",
00:20:45.247    "uuid": "9ca1c9f2-492b-4cf7-85af-c0980ac431a3",
00:20:45.247    "strip_size_kb": 0,
00:20:45.247    "state": "online",
00:20:45.247    "raid_level": "raid1",
00:20:45.247    "superblock": false,
00:20:45.247    "num_base_bdevs": 4,
00:20:45.247    "num_base_bdevs_discovered": 3,
00:20:45.247    "num_base_bdevs_operational": 3,
00:20:45.247    "base_bdevs_list": [
00:20:45.247      {
00:20:45.247        "name": null,
00:20:45.247        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:45.247        "is_configured": false,
00:20:45.247        "data_offset": 0,
00:20:45.247        "data_size": 65536
00:20:45.247      },
00:20:45.247      {
00:20:45.247        "name": "BaseBdev2",
00:20:45.247        "uuid": "7e7ba814-5095-4f1f-a363-a80ecedfb701",
00:20:45.247        "is_configured": true,
00:20:45.247        "data_offset": 0,
00:20:45.247        "data_size": 65536
00:20:45.247      },
00:20:45.247      {
00:20:45.247        "name": "BaseBdev3",
00:20:45.247        "uuid": "54f2e0a9-53b2-4f0d-aed7-e0c0e063e5a9",
00:20:45.247        "is_configured": true,
00:20:45.247        "data_offset": 0,
00:20:45.247        "data_size": 65536
00:20:45.247      },
00:20:45.247      {
00:20:45.247        "name": "BaseBdev4",
00:20:45.247        "uuid": "ddf74652-2c0e-4626-a439-7375df17a3b7",
00:20:45.247        "is_configured": true,
00:20:45.247        "data_offset": 0,
00:20:45.247        "data_size": 65536
00:20:45.247      }
00:20:45.247    ]
00:20:45.247  }'
00:20:45.247    23:54:15	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:45.247   23:54:15	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:20:45.247    23:54:15	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:45.247   23:54:15	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:20:45.247   23:54:15	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:20:45.506  [2024-12-13 23:54:16.092579] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:20:45.506  [2024-12-13 23:54:16.092753] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:45.506  [2024-12-13 23:54:16.102689] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09890
00:20:45.506  [2024-12-13 23:54:16.104729] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:45.506   23:54:16	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:20:46.442   23:54:17	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:46.442   23:54:17	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:46.442   23:54:17	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:46.442   23:54:17	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:46.442   23:54:17	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:46.442    23:54:17	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:46.442    23:54:17	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:46.700   23:54:17	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:46.700    "name": "raid_bdev1",
00:20:46.700    "uuid": "9ca1c9f2-492b-4cf7-85af-c0980ac431a3",
00:20:46.700    "strip_size_kb": 0,
00:20:46.700    "state": "online",
00:20:46.700    "raid_level": "raid1",
00:20:46.700    "superblock": false,
00:20:46.700    "num_base_bdevs": 4,
00:20:46.700    "num_base_bdevs_discovered": 4,
00:20:46.700    "num_base_bdevs_operational": 4,
00:20:46.700    "process": {
00:20:46.700      "type": "rebuild",
00:20:46.700      "target": "spare",
00:20:46.700      "progress": {
00:20:46.700        "blocks": 24576,
00:20:46.700        "percent": 37
00:20:46.700      }
00:20:46.700    },
00:20:46.700    "base_bdevs_list": [
00:20:46.700      {
00:20:46.700        "name": "spare",
00:20:46.700        "uuid": "d8729656-f43d-522a-afae-223f10a2de71",
00:20:46.700        "is_configured": true,
00:20:46.700        "data_offset": 0,
00:20:46.700        "data_size": 65536
00:20:46.700      },
00:20:46.700      {
00:20:46.700        "name": "BaseBdev2",
00:20:46.701        "uuid": "7e7ba814-5095-4f1f-a363-a80ecedfb701",
00:20:46.701        "is_configured": true,
00:20:46.701        "data_offset": 0,
00:20:46.701        "data_size": 65536
00:20:46.701      },
00:20:46.701      {
00:20:46.701        "name": "BaseBdev3",
00:20:46.701        "uuid": "54f2e0a9-53b2-4f0d-aed7-e0c0e063e5a9",
00:20:46.701        "is_configured": true,
00:20:46.701        "data_offset": 0,
00:20:46.701        "data_size": 65536
00:20:46.701      },
00:20:46.701      {
00:20:46.701        "name": "BaseBdev4",
00:20:46.701        "uuid": "ddf74652-2c0e-4626-a439-7375df17a3b7",
00:20:46.701        "is_configured": true,
00:20:46.701        "data_offset": 0,
00:20:46.701        "data_size": 65536
00:20:46.701      }
00:20:46.701    ]
00:20:46.701  }'
00:20:46.701    23:54:17	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:46.701   23:54:17	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:46.701    23:54:17	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:46.959   23:54:17	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:46.959   23:54:17	-- bdev/bdev_raid.sh@617 -- # '[' false = true ']'
00:20:46.959   23:54:17	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4
00:20:46.959   23:54:17	-- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:20:46.959   23:54:17	-- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']'
00:20:46.959   23:54:17	-- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:20:46.959  [2024-12-13 23:54:17.687247] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:20:47.218  [2024-12-13 23:54:17.713434] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09890
00:20:47.218   23:54:17	-- bdev/bdev_raid.sh@649 -- # base_bdevs[1]=
00:20:47.218   23:54:17	-- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- ))
00:20:47.218   23:54:17	-- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:47.218   23:54:17	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:47.218   23:54:17	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:47.218   23:54:17	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:47.218   23:54:17	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:47.218    23:54:17	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:47.218    23:54:17	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:47.218   23:54:17	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:47.218    "name": "raid_bdev1",
00:20:47.218    "uuid": "9ca1c9f2-492b-4cf7-85af-c0980ac431a3",
00:20:47.218    "strip_size_kb": 0,
00:20:47.218    "state": "online",
00:20:47.218    "raid_level": "raid1",
00:20:47.218    "superblock": false,
00:20:47.218    "num_base_bdevs": 4,
00:20:47.218    "num_base_bdevs_discovered": 3,
00:20:47.218    "num_base_bdevs_operational": 3,
00:20:47.218    "process": {
00:20:47.218      "type": "rebuild",
00:20:47.218      "target": "spare",
00:20:47.218      "progress": {
00:20:47.218        "blocks": 34816,
00:20:47.218        "percent": 53
00:20:47.218      }
00:20:47.218    },
00:20:47.218    "base_bdevs_list": [
00:20:47.218      {
00:20:47.218        "name": "spare",
00:20:47.218        "uuid": "d8729656-f43d-522a-afae-223f10a2de71",
00:20:47.218        "is_configured": true,
00:20:47.218        "data_offset": 0,
00:20:47.218        "data_size": 65536
00:20:47.218      },
00:20:47.218      {
00:20:47.218        "name": null,
00:20:47.218        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:47.218        "is_configured": false,
00:20:47.218        "data_offset": 0,
00:20:47.218        "data_size": 65536
00:20:47.218      },
00:20:47.218      {
00:20:47.218        "name": "BaseBdev3",
00:20:47.218        "uuid": "54f2e0a9-53b2-4f0d-aed7-e0c0e063e5a9",
00:20:47.218        "is_configured": true,
00:20:47.218        "data_offset": 0,
00:20:47.218        "data_size": 65536
00:20:47.218      },
00:20:47.218      {
00:20:47.218        "name": "BaseBdev4",
00:20:47.218        "uuid": "ddf74652-2c0e-4626-a439-7375df17a3b7",
00:20:47.218        "is_configured": true,
00:20:47.218        "data_offset": 0,
00:20:47.218        "data_size": 65536
00:20:47.218      }
00:20:47.218    ]
00:20:47.218  }'
00:20:47.218    23:54:17	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:47.477   23:54:17	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:47.477    23:54:17	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:47.477   23:54:18	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:47.477   23:54:18	-- bdev/bdev_raid.sh@657 -- # local timeout=469
00:20:47.477   23:54:18	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:20:47.477   23:54:18	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:47.477   23:54:18	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:47.477   23:54:18	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:47.477   23:54:18	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:47.477   23:54:18	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:47.477    23:54:18	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:47.477    23:54:18	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:47.748   23:54:18	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:47.748    "name": "raid_bdev1",
00:20:47.748    "uuid": "9ca1c9f2-492b-4cf7-85af-c0980ac431a3",
00:20:47.748    "strip_size_kb": 0,
00:20:47.748    "state": "online",
00:20:47.748    "raid_level": "raid1",
00:20:47.748    "superblock": false,
00:20:47.748    "num_base_bdevs": 4,
00:20:47.748    "num_base_bdevs_discovered": 3,
00:20:47.748    "num_base_bdevs_operational": 3,
00:20:47.748    "process": {
00:20:47.748      "type": "rebuild",
00:20:47.748      "target": "spare",
00:20:47.748      "progress": {
00:20:47.748        "blocks": 43008,
00:20:47.748        "percent": 65
00:20:47.748      }
00:20:47.748    },
00:20:47.748    "base_bdevs_list": [
00:20:47.748      {
00:20:47.748        "name": "spare",
00:20:47.748        "uuid": "d8729656-f43d-522a-afae-223f10a2de71",
00:20:47.748        "is_configured": true,
00:20:47.748        "data_offset": 0,
00:20:47.748        "data_size": 65536
00:20:47.748      },
00:20:47.748      {
00:20:47.748        "name": null,
00:20:47.748        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:47.748        "is_configured": false,
00:20:47.748        "data_offset": 0,
00:20:47.748        "data_size": 65536
00:20:47.748      },
00:20:47.748      {
00:20:47.748        "name": "BaseBdev3",
00:20:47.748        "uuid": "54f2e0a9-53b2-4f0d-aed7-e0c0e063e5a9",
00:20:47.748        "is_configured": true,
00:20:47.748        "data_offset": 0,
00:20:47.748        "data_size": 65536
00:20:47.748      },
00:20:47.748      {
00:20:47.748        "name": "BaseBdev4",
00:20:47.748        "uuid": "ddf74652-2c0e-4626-a439-7375df17a3b7",
00:20:47.748        "is_configured": true,
00:20:47.748        "data_offset": 0,
00:20:47.748        "data_size": 65536
00:20:47.748      }
00:20:47.748    ]
00:20:47.748  }'
00:20:47.748    23:54:18	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:47.748   23:54:18	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:47.748    23:54:18	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:47.748   23:54:18	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:47.748   23:54:18	-- bdev/bdev_raid.sh@662 -- # sleep 1
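
The iteration above is one pass of the test's rebuild-progress poll: re-query the bdev once per second, bounded by a bash SECONDS timeout (469 s), and break out as soon as .process.type stops reporting rebuild. Condensed, under the same assumptions as the earlier sketches:

  # Poll until the rebuild process disappears or the timeout expires.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  timeout=469
  while (( SECONDS < timeout )); do
      t=$($rpc bdev_raid_get_bdevs all |
          jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
      [ "$t" = rebuild ] || break
      sleep 1
  done
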
00:20:48.718  [2024-12-13 23:54:19.324197] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:20:48.718  [2024-12-13 23:54:19.324424] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:20:48.718  [2024-12-13 23:54:19.324624] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:48.718   23:54:19	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:20:48.718   23:54:19	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:48.718   23:54:19	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:48.718   23:54:19	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:48.718   23:54:19	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:48.718   23:54:19	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:48.718    23:54:19	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:48.718    23:54:19	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:48.976   23:54:19	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:48.977    "name": "raid_bdev1",
00:20:48.977    "uuid": "9ca1c9f2-492b-4cf7-85af-c0980ac431a3",
00:20:48.977    "strip_size_kb": 0,
00:20:48.977    "state": "online",
00:20:48.977    "raid_level": "raid1",
00:20:48.977    "superblock": false,
00:20:48.977    "num_base_bdevs": 4,
00:20:48.977    "num_base_bdevs_discovered": 3,
00:20:48.977    "num_base_bdevs_operational": 3,
00:20:48.977    "base_bdevs_list": [
00:20:48.977      {
00:20:48.977        "name": "spare",
00:20:48.977        "uuid": "d8729656-f43d-522a-afae-223f10a2de71",
00:20:48.977        "is_configured": true,
00:20:48.977        "data_offset": 0,
00:20:48.977        "data_size": 65536
00:20:48.977      },
00:20:48.977      {
00:20:48.977        "name": null,
00:20:48.977        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:48.977        "is_configured": false,
00:20:48.977        "data_offset": 0,
00:20:48.977        "data_size": 65536
00:20:48.977      },
00:20:48.977      {
00:20:48.977        "name": "BaseBdev3",
00:20:48.977        "uuid": "54f2e0a9-53b2-4f0d-aed7-e0c0e063e5a9",
00:20:48.977        "is_configured": true,
00:20:48.977        "data_offset": 0,
00:20:48.977        "data_size": 65536
00:20:48.977      },
00:20:48.977      {
00:20:48.977        "name": "BaseBdev4",
00:20:48.977        "uuid": "ddf74652-2c0e-4626-a439-7375df17a3b7",
00:20:48.977        "is_configured": true,
00:20:48.977        "data_offset": 0,
00:20:48.977        "data_size": 65536
00:20:48.977      }
00:20:48.977    ]
00:20:48.977  }'
00:20:48.977    23:54:19	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:48.977   23:54:19	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:20:48.977    23:54:19	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:49.236   23:54:19	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:20:49.236   23:54:19	-- bdev/bdev_raid.sh@660 -- # break
00:20:49.236   23:54:19	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:49.236   23:54:19	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:49.236   23:54:19	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:20:49.236   23:54:19	-- bdev/bdev_raid.sh@185 -- # local target=none
00:20:49.236   23:54:19	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:49.236    23:54:19	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:49.236    23:54:19	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:49.494   23:54:19	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:49.494    "name": "raid_bdev1",
00:20:49.494    "uuid": "9ca1c9f2-492b-4cf7-85af-c0980ac431a3",
00:20:49.494    "strip_size_kb": 0,
00:20:49.494    "state": "online",
00:20:49.494    "raid_level": "raid1",
00:20:49.494    "superblock": false,
00:20:49.494    "num_base_bdevs": 4,
00:20:49.494    "num_base_bdevs_discovered": 3,
00:20:49.494    "num_base_bdevs_operational": 3,
00:20:49.494    "base_bdevs_list": [
00:20:49.494      {
00:20:49.494        "name": "spare",
00:20:49.494        "uuid": "d8729656-f43d-522a-afae-223f10a2de71",
00:20:49.494        "is_configured": true,
00:20:49.494        "data_offset": 0,
00:20:49.494        "data_size": 65536
00:20:49.494      },
00:20:49.494      {
00:20:49.494        "name": null,
00:20:49.494        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:49.494        "is_configured": false,
00:20:49.494        "data_offset": 0,
00:20:49.494        "data_size": 65536
00:20:49.494      },
00:20:49.494      {
00:20:49.494        "name": "BaseBdev3",
00:20:49.494        "uuid": "54f2e0a9-53b2-4f0d-aed7-e0c0e063e5a9",
00:20:49.494        "is_configured": true,
00:20:49.494        "data_offset": 0,
00:20:49.494        "data_size": 65536
00:20:49.494      },
00:20:49.494      {
00:20:49.494        "name": "BaseBdev4",
00:20:49.494        "uuid": "ddf74652-2c0e-4626-a439-7375df17a3b7",
00:20:49.494        "is_configured": true,
00:20:49.494        "data_offset": 0,
00:20:49.494        "data_size": 65536
00:20:49.494      }
00:20:49.494    ]
00:20:49.494  }'
00:20:49.494    23:54:19	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:49.494   23:54:20	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:20:49.494    23:54:20	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:49.494   23:54:20	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:20:49.494   23:54:20	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:20:49.494   23:54:20	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:49.494   23:54:20	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:49.494   23:54:20	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:49.494   23:54:20	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:49.494   23:54:20	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:20:49.494   23:54:20	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:49.494   23:54:20	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:49.494   23:54:20	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:49.494   23:54:20	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:49.494    23:54:20	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:49.494    23:54:20	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:49.753   23:54:20	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:49.753    "name": "raid_bdev1",
00:20:49.753    "uuid": "9ca1c9f2-492b-4cf7-85af-c0980ac431a3",
00:20:49.753    "strip_size_kb": 0,
00:20:49.753    "state": "online",
00:20:49.753    "raid_level": "raid1",
00:20:49.753    "superblock": false,
00:20:49.753    "num_base_bdevs": 4,
00:20:49.753    "num_base_bdevs_discovered": 3,
00:20:49.753    "num_base_bdevs_operational": 3,
00:20:49.753    "base_bdevs_list": [
00:20:49.753      {
00:20:49.753        "name": "spare",
00:20:49.753        "uuid": "d8729656-f43d-522a-afae-223f10a2de71",
00:20:49.753        "is_configured": true,
00:20:49.753        "data_offset": 0,
00:20:49.753        "data_size": 65536
00:20:49.753      },
00:20:49.753      {
00:20:49.753        "name": null,
00:20:49.753        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:49.753        "is_configured": false,
00:20:49.753        "data_offset": 0,
00:20:49.753        "data_size": 65536
00:20:49.753      },
00:20:49.753      {
00:20:49.753        "name": "BaseBdev3",
00:20:49.753        "uuid": "54f2e0a9-53b2-4f0d-aed7-e0c0e063e5a9",
00:20:49.753        "is_configured": true,
00:20:49.753        "data_offset": 0,
00:20:49.753        "data_size": 65536
00:20:49.753      },
00:20:49.753      {
00:20:49.753        "name": "BaseBdev4",
00:20:49.753        "uuid": "ddf74652-2c0e-4626-a439-7375df17a3b7",
00:20:49.753        "is_configured": true,
00:20:49.753        "data_offset": 0,
00:20:49.753        "data_size": 65536
00:20:49.753      }
00:20:49.753    ]
00:20:49.753  }'
00:20:49.753   23:54:20	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:49.753   23:54:20	-- common/autotest_common.sh@10 -- # set +x
00:20:50.320   23:54:20	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:20:50.579  [2024-12-13 23:54:21.209458] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:50.579  [2024-12-13 23:54:21.209663] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:50.579  [2024-12-13 23:54:21.209899] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:50.579  [2024-12-13 23:54:21.210121] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:50.579  [2024-12-13 23:54:21.210244] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline
00:20:50.579    23:54:21	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:50.579    23:54:21	-- bdev/bdev_raid.sh@671 -- # jq length
00:20:50.838   23:54:21	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:20:50.838   23:54:21	-- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
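
Deletion walks the array from online to offline and frees it (the DEBUG lines above show deconfigure, destruct and cleanup in order); the check that follows simply asserts that no raid bdevs remain. As a condensed two-step sketch, same assumptions as before:

  # Delete the array and confirm the raid bdev list is empty.
  $rpc bdev_raid_delete raid_bdev1
  [ "$($rpc bdev_raid_get_bdevs all | jq length)" -eq 0 ]
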
00:20:50.838   23:54:21	-- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:20:50.838   23:54:21	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:50.838   23:54:21	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:20:50.838   23:54:21	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:50.838   23:54:21	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:20:50.838   23:54:21	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:50.838   23:54:21	-- bdev/nbd_common.sh@12 -- # local i
00:20:50.838   23:54:21	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:50.838   23:54:21	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:20:50.838   23:54:21	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:20:51.096  /dev/nbd0
00:20:51.096    23:54:21	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:20:51.096   23:54:21	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:20:51.096   23:54:21	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:20:51.096   23:54:21	-- common/autotest_common.sh@867 -- # local i
00:20:51.096   23:54:21	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:20:51.096   23:54:21	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:20:51.096   23:54:21	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:20:51.096   23:54:21	-- common/autotest_common.sh@871 -- # break
00:20:51.096   23:54:21	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:20:51.096   23:54:21	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:20:51.096   23:54:21	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:51.096  1+0 records in
00:20:51.096  1+0 records out
00:20:51.096  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049196 s, 8.3 MB/s
00:20:51.096    23:54:21	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:51.096   23:54:21	-- common/autotest_common.sh@884 -- # size=4096
00:20:51.096   23:54:21	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:51.096   23:54:21	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:20:51.096   23:54:21	-- common/autotest_common.sh@887 -- # return 0
00:20:51.096   23:54:21	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:51.096   23:54:21	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:20:51.096   23:54:21	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:20:51.355  /dev/nbd1
00:20:51.355    23:54:22	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:20:51.355   23:54:22	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:20:51.355   23:54:22	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:20:51.355   23:54:22	-- common/autotest_common.sh@867 -- # local i
00:20:51.355   23:54:22	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:20:51.355   23:54:22	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:20:51.355   23:54:22	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:20:51.613   23:54:22	-- common/autotest_common.sh@871 -- # break
00:20:51.613   23:54:22	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:20:51.613   23:54:22	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:20:51.613   23:54:22	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:51.613  1+0 records in
00:20:51.613  1+0 records out
00:20:51.613  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618535 s, 6.6 MB/s
00:20:51.613    23:54:22	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:51.613   23:54:22	-- common/autotest_common.sh@884 -- # size=4096
00:20:51.613   23:54:22	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:51.613   23:54:22	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:20:51.613   23:54:22	-- common/autotest_common.sh@887 -- # return 0
00:20:51.613   23:54:22	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:51.613   23:54:22	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:20:51.613   23:54:22	-- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
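
This cmp is the payoff of the whole test: BaseBdev1, an original member untouched since the random fill, and the rebuilt spare are both exported over nbd and compared byte for byte. raid1 keeps a complete copy on every member, so cmp exiting zero means the rebuild reproduced the data exactly; the -i 0 offset mirrors the data_offset of 0 read earlier (no superblock in this variant).
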
00:20:51.613   23:54:22	-- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:20:51.613   23:54:22	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:51.613   23:54:22	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:20:51.613   23:54:22	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:51.613   23:54:22	-- bdev/nbd_common.sh@51 -- # local i
00:20:51.613   23:54:22	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:51.613   23:54:22	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:20:51.871    23:54:22	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:51.871   23:54:22	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:51.871   23:54:22	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:51.871   23:54:22	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:51.871   23:54:22	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:51.871   23:54:22	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:51.871   23:54:22	-- bdev/nbd_common.sh@41 -- # break
00:20:51.871   23:54:22	-- bdev/nbd_common.sh@45 -- # return 0
00:20:51.871   23:54:22	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:51.871   23:54:22	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:20:52.130    23:54:22	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:20:52.130   23:54:22	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:20:52.130   23:54:22	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:20:52.130   23:54:22	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:52.130   23:54:22	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:52.130   23:54:22	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:20:52.130   23:54:22	-- bdev/nbd_common.sh@41 -- # break
00:20:52.130   23:54:22	-- bdev/nbd_common.sh@45 -- # return 0
00:20:52.130   23:54:22	-- bdev/bdev_raid.sh@692 -- # '[' false = true ']'
00:20:52.130   23:54:22	-- bdev/bdev_raid.sh@709 -- # killprocess 124451
00:20:52.130   23:54:22	-- common/autotest_common.sh@936 -- # '[' -z 124451 ']'
00:20:52.130   23:54:22	-- common/autotest_common.sh@940 -- # kill -0 124451
00:20:52.130    23:54:22	-- common/autotest_common.sh@941 -- # uname
00:20:52.130   23:54:22	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:52.130    23:54:22	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124451
00:20:52.130   23:54:22	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:52.130   23:54:22	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:52.130   23:54:22	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 124451'
00:20:52.130  killing process with pid 124451
00:20:52.130   23:54:22	-- common/autotest_common.sh@955 -- # kill 124451
00:20:52.130  Received shutdown signal, test time was about 60.000000 seconds
00:20:52.130  
00:20:52.130                                                                                                  Latency(us)
00:20:52.130  
[2024-12-13T23:54:22.862Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:52.130  
[2024-12-13T23:54:22.862Z]  ===================================================================================================================
00:20:52.130  
[2024-12-13T23:54:22.862Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
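
The all-zero summary is expected: bdevperf was started with -z, so it sat idle waiting for an RPC-driven workload this test never issues, and the shutdown statistics therefore cover zero completed I/Os. The impossible average latency is exactly 2^64 (18446744073709551616), presumably garbage from averaging over zero completions.
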
00:20:52.130   23:54:22	-- common/autotest_common.sh@960 -- # wait 124451
00:20:52.130  [2024-12-13 23:54:22.832278] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:20:52.697  [2024-12-13 23:54:23.163567] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:20:53.634  ************************************
00:20:53.634  END TEST raid_rebuild_test
00:20:53.634  ************************************
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@711 -- # return 0
00:20:53.634  
00:20:53.634  real	0m22.472s
00:20:53.634  user	0m31.127s
00:20:53.634  sys	0m3.840s
00:20:53.634   23:54:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:53.634   23:54:24	-- common/autotest_common.sh@10 -- # set +x
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false
00:20:53.634   23:54:24	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:20:53.634   23:54:24	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:53.634   23:54:24	-- common/autotest_common.sh@10 -- # set +x
00:20:53.634  ************************************
00:20:53.634  START TEST raid_rebuild_test_sb
00:20:53.634  ************************************
00:20:53.634   23:54:24	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true false
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@519 -- # local superblock=true
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@520 -- # local background_io=false
00:20:53.634    23:54:24	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:20:53.634    23:54:24	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:53.634    23:54:24	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:20:53.634    23:54:24	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:53.634    23:54:24	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:53.634    23:54:24	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:20:53.634    23:54:24	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:53.634    23:54:24	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:53.634    23:54:24	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:20:53.634    23:54:24	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:53.634    23:54:24	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:53.634    23:54:24	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev4
00:20:53.634    23:54:24	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:53.634    23:54:24	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@536 -- # strip_size=0
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@539 -- # '[' true = true ']'
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@540 -- # create_arg+=' -s'
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@544 -- # raid_pid=125002
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@545 -- # waitforlisten 125002 /var/tmp/spdk-raid.sock
00:20:53.634   23:54:24	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:20:53.634   23:54:24	-- common/autotest_common.sh@829 -- # '[' -z 125002 ']'
00:20:53.634   23:54:24	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:20:53.634   23:54:24	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:53.634   23:54:24	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:20:53.634  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:20:53.634   23:54:24	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:53.634   23:54:24	-- common/autotest_common.sh@10 -- # set +x
00:20:53.634  [2024-12-13 23:54:24.293804] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:53.634  [2024-12-13 23:54:24.294811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125002 ]
00:20:53.634  I/O size of 3145728 is greater than zero copy threshold (65536).
00:20:53.634  Zero copy mechanism will not be used.
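
The bdevperf invocation at @543 drives this superblock variant the same way as the previous test: -t 60 caps the run at 60 s, -w randrw -M 50 requests a 50/50 read/write mix, -o 3M uses 3 MiB I/Os (3145728 bytes, which is what trips the zero-copy notice above), and -q 2 keeps two I/Os in flight against raid_bdev1 over /var/tmp/spdk-raid.sock.
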
00:20:53.894  [2024-12-13 23:54:24.459394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:53.894  [2024-12-13 23:54:24.625096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:54.152  [2024-12-13 23:54:24.797406] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:54.719   23:54:25	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:54.719   23:54:25	-- common/autotest_common.sh@862 -- # return 0
00:20:54.719   23:54:25	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:54.719   23:54:25	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:20:54.719   23:54:25	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:20:54.719  BaseBdev1_malloc
00:20:54.719   23:54:25	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:20:54.977  [2024-12-13 23:54:25.616041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:20:54.977  [2024-12-13 23:54:25.616418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:54.977  [2024-12-13 23:54:25.616609] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000006980
00:20:54.977  [2024-12-13 23:54:25.616779] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:54.977  [2024-12-13 23:54:25.619048] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:54.977  [2024-12-13 23:54:25.619230] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:20:54.977  BaseBdev1
00:20:54.977   23:54:25	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:54.977   23:54:25	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:20:54.977   23:54:25	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:20:55.236  BaseBdev2_malloc
00:20:55.236   23:54:25	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:20:55.494  [2024-12-13 23:54:26.152977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:20:55.494  [2024-12-13 23:54:26.153193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:55.494  [2024-12-13 23:54:26.153279] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000007580
00:20:55.494  [2024-12-13 23:54:26.153432] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:55.494  [2024-12-13 23:54:26.155770] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:55.495  [2024-12-13 23:54:26.155975] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:20:55.495  BaseBdev2
00:20:55.495   23:54:26	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:55.495   23:54:26	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:20:55.495   23:54:26	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:20:55.753  BaseBdev3_malloc
00:20:55.753   23:54:26	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:20:56.012  [2024-12-13 23:54:26.565843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:20:56.012  [2024-12-13 23:54:26.566050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:56.012  [2024-12-13 23:54:26.566126] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000008180
00:20:56.012  [2024-12-13 23:54:26.566267] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:56.012  [2024-12-13 23:54:26.568508] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:56.012  [2024-12-13 23:54:26.568675] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:20:56.012  BaseBdev3
00:20:56.012   23:54:26	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:56.012   23:54:26	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:20:56.012   23:54:26	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:20:56.270  BaseBdev4_malloc
00:20:56.270   23:54:26	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:20:56.270  [2024-12-13 23:54:26.974548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:20:56.270  [2024-12-13 23:54:26.974610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:56.270  [2024-12-13 23:54:26.974641] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80
00:20:56.270  [2024-12-13 23:54:26.974684] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:56.270  [2024-12-13 23:54:26.976875] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:56.270  [2024-12-13 23:54:26.976924] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:20:56.270  BaseBdev4
00:20:56.270   23:54:26	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:20:56.529  spare_malloc
00:20:56.529   23:54:27	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:20:56.787  spare_delay
00:20:56.787   23:54:27	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:20:57.046  [2024-12-13 23:54:27.627412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:57.046  [2024-12-13 23:54:27.627490] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:57.046  [2024-12-13 23:54:27.627519] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:20:57.046  [2024-12-13 23:54:27.627562] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:57.046  [2024-12-13 23:54:27.629765] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:57.046  [2024-12-13 23:54:27.629821] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:57.046  spare
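The spare is built as a three-layer stack: malloc, then a delay bdev, then a passthru named "spare". Per SPDK's delay bdev, the flags traced above (-r 0 -t 0 -w 100000 -n 100000) are average/p99 read and write latencies in microseconds, so reads pass through untouched while every write eats roughly 100 ms; that is what keeps the rebuild slow enough for the progress checks later in the log. A sketch of the stack, same calls as traced:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b spare_malloc
  "$rpc" -s "$sock" bdev_delay_create -b spare_malloc -d spare_delay \
      -r 0 -t 0 -w 100000 -n 100000    # ~100 ms avg and p99 write latency
  "$rpc" -s "$sock" bdev_passthru_create -b spare_delay -p spare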
00:20:57.046   23:54:27	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:20:57.304  [2024-12-13 23:54:27.851522] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:57.304  [2024-12-13 23:54:27.853371] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:57.304  [2024-12-13 23:54:27.853459] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:57.304  [2024-12-13 23:54:27.853514] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:20:57.304  [2024-12-13 23:54:27.853711] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580
00:20:57.304  [2024-12-13 23:54:27.853724] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:20:57.304  [2024-12-13 23:54:27.853825] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:20:57.304  [2024-12-13 23:54:27.854151] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580
00:20:57.305  [2024-12-13 23:54:27.854172] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580
00:20:57.305  [2024-12-13 23:54:27.854285] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
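With the four passthru bdevs claimed, the array itself is created. -s enables the on-disk superblock ("superblock": true in the JSON below), which reserves the first 2048 blocks of every member and is what later lets a re-created member be recognized and re-claimed by examine; -r raid1 with no strip size gives the mirrored, strip_size_kb 0 layout the verify helper checks next. The single traced call, as a sketch:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_create -s -r raid1 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1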
00:20:57.305   23:54:27	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:20:57.305   23:54:27	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:57.305   23:54:27	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:57.305   23:54:27	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:57.305   23:54:27	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:57.305   23:54:27	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:20:57.305   23:54:27	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:57.305   23:54:27	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:57.305   23:54:27	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:57.305   23:54:27	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:57.305    23:54:27	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:57.305    23:54:27	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:57.563   23:54:28	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:57.563    "name": "raid_bdev1",
00:20:57.563    "uuid": "13e61ee0-a40c-42fe-98a2-f5969c06308a",
00:20:57.563    "strip_size_kb": 0,
00:20:57.563    "state": "online",
00:20:57.563    "raid_level": "raid1",
00:20:57.563    "superblock": true,
00:20:57.563    "num_base_bdevs": 4,
00:20:57.563    "num_base_bdevs_discovered": 4,
00:20:57.563    "num_base_bdevs_operational": 4,
00:20:57.563    "base_bdevs_list": [
00:20:57.563      {
00:20:57.563        "name": "BaseBdev1",
00:20:57.563        "uuid": "415703a6-e128-5eab-bdbf-2a473122b7a6",
00:20:57.563        "is_configured": true,
00:20:57.563        "data_offset": 2048,
00:20:57.563        "data_size": 63488
00:20:57.563      },
00:20:57.563      {
00:20:57.563        "name": "BaseBdev2",
00:20:57.563        "uuid": "cbaaceb9-b765-5b83-956a-014832e76d85",
00:20:57.563        "is_configured": true,
00:20:57.563        "data_offset": 2048,
00:20:57.563        "data_size": 63488
00:20:57.563      },
00:20:57.563      {
00:20:57.563        "name": "BaseBdev3",
00:20:57.563        "uuid": "04b7c2ee-fe4e-5748-ac3d-f360b407dd82",
00:20:57.563        "is_configured": true,
00:20:57.563        "data_offset": 2048,
00:20:57.563        "data_size": 63488
00:20:57.563      },
00:20:57.563      {
00:20:57.563        "name": "BaseBdev4",
00:20:57.563        "uuid": "fc0315ae-1f7d-5bef-b30a-b659c189e931",
00:20:57.563        "is_configured": true,
00:20:57.563        "data_offset": 2048,
00:20:57.563        "data_size": 63488
00:20:57.563      }
00:20:57.563    ]
00:20:57.563  }'
00:20:57.563   23:54:28	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:57.563   23:54:28	-- common/autotest_common.sh@10 -- # set +x
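verify_raid_bdev_state (traced at lines 117-129) boils down to fetching every raid bdev, selecting the one under test with jq, and comparing the fields that matter against the expected values passed in. A condensed sketch of the check the JSON above just satisfied, not the helper verbatim:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
         | jq -r '.[] | select(.name == "raid_bdev1")')
  [ "$(jq -r .state <<<"$info")" = online ]                   || exit 1
  [ "$(jq -r .raid_level <<<"$info")" = raid1 ]               || exit 1
  [ "$(jq -r .strip_size_kb <<<"$info")" -eq 0 ]              || exit 1
  [ "$(jq -r .num_base_bdevs_operational <<<"$info")" -eq 4 ] || exit 1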
00:20:58.130    23:54:28	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:20:58.130    23:54:28	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:20:58.130  [2024-12-13 23:54:28.855813] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:58.389   23:54:28	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488
00:20:58.389    23:54:28	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:58.389    23:54:28	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:20:58.389   23:54:29	-- bdev/bdev_raid.sh@570 -- # data_offset=2048
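The two numbers just read back fit together: each base bdev holds 32 MiB / 512 B = 65536 blocks, the superblock reservation pushes data_offset to 2048 blocks (1 MiB), and raid1 capacity equals a single member's data region, 65536 - 2048 = 63488 blocks, exactly the raid_bdev_size above. The bookkeeping:

  blocks=$(( 32 * 1024 * 1024 / 512 ))   # 65536 blocks per member
  data_offset=2048                       # superblock reservation (1 MiB)
  echo $(( blocks - data_offset ))       # 63488 == raid_bdev_size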
00:20:58.389   23:54:29	-- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:20:58.389   23:54:29	-- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:20:58.389   23:54:29	-- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:20:58.389   23:54:29	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:58.389   23:54:29	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:20:58.389   23:54:29	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:58.389   23:54:29	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:20:58.648   23:54:29	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:58.648   23:54:29	-- bdev/nbd_common.sh@12 -- # local i
00:20:58.648   23:54:29	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:58.648   23:54:29	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:58.648   23:54:29	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:20:58.648  [2024-12-13 23:54:29.355739] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:20:58.648  /dev/nbd0
00:20:58.907    23:54:29	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:20:58.907   23:54:29	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:20:58.907   23:54:29	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:20:58.907   23:54:29	-- common/autotest_common.sh@867 -- # local i
00:20:58.907   23:54:29	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:20:58.907   23:54:29	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:20:58.907   23:54:29	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:20:58.907   23:54:29	-- common/autotest_common.sh@871 -- # break
00:20:58.907   23:54:29	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:20:58.907   23:54:29	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:20:58.907   23:54:29	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:58.907  1+0 records in
00:20:58.907  1+0 records out
00:20:58.907  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027049 s, 15.1 MB/s
00:20:58.907    23:54:29	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:58.907   23:54:29	-- common/autotest_common.sh@884 -- # size=4096
00:20:58.907   23:54:29	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:58.907   23:54:29	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:20:58.907   23:54:29	-- common/autotest_common.sh@887 -- # return 0
00:20:58.907   23:54:29	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:58.907   23:54:29	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:58.907   23:54:29	-- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']'
00:20:58.907   23:54:29	-- bdev/bdev_raid.sh@584 -- # write_unit_size=1
00:20:58.907   23:54:29	-- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:21:04.202  63488+0 records in
00:21:04.202  63488+0 records out
00:21:04.202  32505856 bytes (33 MB, 31 MiB) copied, 5.50831 s, 5.9 MB/s
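The fill step exposes the array as an nbd device and writes random data across the full data region: 63488 blocks x 512 B = 32505856 bytes, matching the dd summary above. This seeds every mirror with known content so the post-rebuild cmp at the end of the test has something to compare. The traced sequence, condensed:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" nbd_start_disk raid_bdev1 /dev/nbd0
  dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0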
00:21:04.202   23:54:34	-- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:21:04.202   23:54:34	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:04.202   23:54:34	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:21:04.202   23:54:34	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:04.202   23:54:34	-- bdev/nbd_common.sh@51 -- # local i
00:21:04.202   23:54:34	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:04.202   23:54:34	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:21:04.460    23:54:35	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:21:04.460   23:54:35	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:21:04.460   23:54:35	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:21:04.460   23:54:35	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:04.460   23:54:35	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:04.460   23:54:35	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:21:04.460  [2024-12-13 23:54:35.187758] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:04.460   23:54:35	-- bdev/nbd_common.sh@41 -- # break
00:21:04.460   23:54:35	-- bdev/nbd_common.sh@45 -- # return 0
00:21:04.460   23:54:35	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:21:04.718  [2024-12-13 23:54:35.447376] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
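Removing a member from a live raid1 degrades it without taking it offline: the verify that follows expects state still online but discovered/operational down from 4 to 3, with the vacated slot rendered as a null name and all-zero uuid in base_bdevs_list. The removal plus a quick state peek, as a sketch:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq '.[]
      | select(.name == "raid_bdev1")
      | {state, num_base_bdevs_discovered, num_base_bdevs_operational}'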
00:21:04.976   23:54:35	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:04.976   23:54:35	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:04.976   23:54:35	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:04.976   23:54:35	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:04.976   23:54:35	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:04.976   23:54:35	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:04.976   23:54:35	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:04.976   23:54:35	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:04.976   23:54:35	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:04.976   23:54:35	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:04.976    23:54:35	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:04.976    23:54:35	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:05.234   23:54:35	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:05.234    "name": "raid_bdev1",
00:21:05.234    "uuid": "13e61ee0-a40c-42fe-98a2-f5969c06308a",
00:21:05.235    "strip_size_kb": 0,
00:21:05.235    "state": "online",
00:21:05.235    "raid_level": "raid1",
00:21:05.235    "superblock": true,
00:21:05.235    "num_base_bdevs": 4,
00:21:05.235    "num_base_bdevs_discovered": 3,
00:21:05.235    "num_base_bdevs_operational": 3,
00:21:05.235    "base_bdevs_list": [
00:21:05.235      {
00:21:05.235        "name": null,
00:21:05.235        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:05.235        "is_configured": false,
00:21:05.235        "data_offset": 2048,
00:21:05.235        "data_size": 63488
00:21:05.235      },
00:21:05.235      {
00:21:05.235        "name": "BaseBdev2",
00:21:05.235        "uuid": "cbaaceb9-b765-5b83-956a-014832e76d85",
00:21:05.235        "is_configured": true,
00:21:05.235        "data_offset": 2048,
00:21:05.235        "data_size": 63488
00:21:05.235      },
00:21:05.235      {
00:21:05.235        "name": "BaseBdev3",
00:21:05.235        "uuid": "04b7c2ee-fe4e-5748-ac3d-f360b407dd82",
00:21:05.235        "is_configured": true,
00:21:05.235        "data_offset": 2048,
00:21:05.235        "data_size": 63488
00:21:05.235      },
00:21:05.235      {
00:21:05.235        "name": "BaseBdev4",
00:21:05.235        "uuid": "fc0315ae-1f7d-5bef-b30a-b659c189e931",
00:21:05.235        "is_configured": true,
00:21:05.235        "data_offset": 2048,
00:21:05.235        "data_size": 63488
00:21:05.235      }
00:21:05.235    ]
00:21:05.235  }'
00:21:05.235   23:54:35	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:05.235   23:54:35	-- common/autotest_common.sh@10 -- # set +x
00:21:05.802   23:54:36	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:21:05.802  [2024-12-13 23:54:36.499680] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:21:05.802  [2024-12-13 23:54:36.499835] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:05.802  [2024-12-13 23:54:36.510443] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0
00:21:05.802  [2024-12-13 23:54:36.512526] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:21:05.802   23:54:36	-- bdev/bdev_raid.sh@598 -- # sleep 1
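Adding the spare into the degraded array kicks off a rebuild on a dedicated thread (the NOTICE above), and the sleep gives it a moment to start before the next verify polls for it. While the rebuild runs, bdev_raid_get_bdevs reports a process object with type "rebuild", target "spare", and a blocks/percent progress pair, which is what the following JSON shows at 26624 of 63488 blocks (41%). Sketch:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare
  sleep 1    # let the rebuild thread start before polling
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .process'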
00:21:07.178   23:54:37	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:07.178   23:54:37	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:07.178   23:54:37	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:07.178   23:54:37	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:07.178   23:54:37	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:07.178    23:54:37	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:07.178    23:54:37	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:07.178   23:54:37	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:07.178    "name": "raid_bdev1",
00:21:07.178    "uuid": "13e61ee0-a40c-42fe-98a2-f5969c06308a",
00:21:07.178    "strip_size_kb": 0,
00:21:07.178    "state": "online",
00:21:07.178    "raid_level": "raid1",
00:21:07.178    "superblock": true,
00:21:07.178    "num_base_bdevs": 4,
00:21:07.178    "num_base_bdevs_discovered": 4,
00:21:07.178    "num_base_bdevs_operational": 4,
00:21:07.178    "process": {
00:21:07.178      "type": "rebuild",
00:21:07.178      "target": "spare",
00:21:07.178      "progress": {
00:21:07.178        "blocks": 26624,
00:21:07.178        "percent": 41
00:21:07.178      }
00:21:07.178    },
00:21:07.178    "base_bdevs_list": [
00:21:07.178      {
00:21:07.178        "name": "spare",
00:21:07.178        "uuid": "e0395d1b-06e9-5435-b324-7ae83e565625",
00:21:07.178        "is_configured": true,
00:21:07.178        "data_offset": 2048,
00:21:07.178        "data_size": 63488
00:21:07.178      },
00:21:07.178      {
00:21:07.178        "name": "BaseBdev2",
00:21:07.178        "uuid": "cbaaceb9-b765-5b83-956a-014832e76d85",
00:21:07.178        "is_configured": true,
00:21:07.178        "data_offset": 2048,
00:21:07.178        "data_size": 63488
00:21:07.178      },
00:21:07.178      {
00:21:07.178        "name": "BaseBdev3",
00:21:07.178        "uuid": "04b7c2ee-fe4e-5748-ac3d-f360b407dd82",
00:21:07.178        "is_configured": true,
00:21:07.178        "data_offset": 2048,
00:21:07.178        "data_size": 63488
00:21:07.178      },
00:21:07.178      {
00:21:07.178        "name": "BaseBdev4",
00:21:07.178        "uuid": "fc0315ae-1f7d-5bef-b30a-b659c189e931",
00:21:07.178        "is_configured": true,
00:21:07.178        "data_offset": 2048,
00:21:07.178        "data_size": 63488
00:21:07.178      }
00:21:07.178    ]
00:21:07.178  }'
00:21:07.178    23:54:37	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:07.178   23:54:37	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:07.178    23:54:37	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:07.437   23:54:37	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:07.437   23:54:37	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:21:07.437  [2024-12-13 23:54:38.155416] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:21:07.696  [2024-12-13 23:54:38.223376] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:21:07.696  [2024-12-13 23:54:38.223445] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:07.696   23:54:38	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:07.696   23:54:38	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:07.696   23:54:38	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:07.696   23:54:38	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:07.696   23:54:38	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:07.696   23:54:38	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:07.696   23:54:38	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:07.696   23:54:38	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:07.696   23:54:38	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:07.696   23:54:38	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:07.696    23:54:38	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:07.696    23:54:38	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:07.954   23:54:38	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:07.954    "name": "raid_bdev1",
00:21:07.954    "uuid": "13e61ee0-a40c-42fe-98a2-f5969c06308a",
00:21:07.954    "strip_size_kb": 0,
00:21:07.954    "state": "online",
00:21:07.954    "raid_level": "raid1",
00:21:07.954    "superblock": true,
00:21:07.954    "num_base_bdevs": 4,
00:21:07.954    "num_base_bdevs_discovered": 3,
00:21:07.954    "num_base_bdevs_operational": 3,
00:21:07.955    "base_bdevs_list": [
00:21:07.955      {
00:21:07.955        "name": null,
00:21:07.955        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:07.955        "is_configured": false,
00:21:07.955        "data_offset": 2048,
00:21:07.955        "data_size": 63488
00:21:07.955      },
00:21:07.955      {
00:21:07.955        "name": "BaseBdev2",
00:21:07.955        "uuid": "cbaaceb9-b765-5b83-956a-014832e76d85",
00:21:07.955        "is_configured": true,
00:21:07.955        "data_offset": 2048,
00:21:07.955        "data_size": 63488
00:21:07.955      },
00:21:07.955      {
00:21:07.955        "name": "BaseBdev3",
00:21:07.955        "uuid": "04b7c2ee-fe4e-5748-ac3d-f360b407dd82",
00:21:07.955        "is_configured": true,
00:21:07.955        "data_offset": 2048,
00:21:07.955        "data_size": 63488
00:21:07.955      },
00:21:07.955      {
00:21:07.955        "name": "BaseBdev4",
00:21:07.955        "uuid": "fc0315ae-1f7d-5bef-b30a-b659c189e931",
00:21:07.955        "is_configured": true,
00:21:07.955        "data_offset": 2048,
00:21:07.955        "data_size": 63488
00:21:07.955      }
00:21:07.955    ]
00:21:07.955  }'
00:21:07.955   23:54:38	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:07.955   23:54:38	-- common/autotest_common.sh@10 -- # set +x
00:21:08.522   23:54:39	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:21:08.522   23:54:39	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:08.522   23:54:39	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:21:08.522   23:54:39	-- bdev/bdev_raid.sh@185 -- # local target=none
00:21:08.522   23:54:39	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:08.522    23:54:39	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:08.522    23:54:39	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:08.781   23:54:39	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:08.781    "name": "raid_bdev1",
00:21:08.781    "uuid": "13e61ee0-a40c-42fe-98a2-f5969c06308a",
00:21:08.781    "strip_size_kb": 0,
00:21:08.781    "state": "online",
00:21:08.781    "raid_level": "raid1",
00:21:08.781    "superblock": true,
00:21:08.781    "num_base_bdevs": 4,
00:21:08.781    "num_base_bdevs_discovered": 3,
00:21:08.781    "num_base_bdevs_operational": 3,
00:21:08.781    "base_bdevs_list": [
00:21:08.781      {
00:21:08.781        "name": null,
00:21:08.781        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:08.781        "is_configured": false,
00:21:08.781        "data_offset": 2048,
00:21:08.781        "data_size": 63488
00:21:08.781      },
00:21:08.781      {
00:21:08.781        "name": "BaseBdev2",
00:21:08.781        "uuid": "cbaaceb9-b765-5b83-956a-014832e76d85",
00:21:08.781        "is_configured": true,
00:21:08.781        "data_offset": 2048,
00:21:08.781        "data_size": 63488
00:21:08.781      },
00:21:08.781      {
00:21:08.781        "name": "BaseBdev3",
00:21:08.781        "uuid": "04b7c2ee-fe4e-5748-ac3d-f360b407dd82",
00:21:08.781        "is_configured": true,
00:21:08.781        "data_offset": 2048,
00:21:08.781        "data_size": 63488
00:21:08.781      },
00:21:08.781      {
00:21:08.781        "name": "BaseBdev4",
00:21:08.781        "uuid": "fc0315ae-1f7d-5bef-b30a-b659c189e931",
00:21:08.781        "is_configured": true,
00:21:08.781        "data_offset": 2048,
00:21:08.781        "data_size": 63488
00:21:08.781      }
00:21:08.781    ]
00:21:08.781  }'
00:21:08.781    23:54:39	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:08.781   23:54:39	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:21:08.781    23:54:39	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:08.781   23:54:39	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:21:08.781   23:54:39	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:21:09.040  [2024-12-13 23:54:39.735030] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:21:09.040  [2024-12-13 23:54:39.735070] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:09.040  [2024-12-13 23:54:39.744999] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360
00:21:09.040  [2024-12-13 23:54:39.746822] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:21:09.040   23:54:39	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:21:10.418   23:54:40	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:10.418   23:54:40	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:10.418   23:54:40	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:10.418   23:54:40	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:10.418   23:54:40	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:10.418    23:54:40	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:10.418    23:54:40	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:10.418   23:54:40	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:10.418    "name": "raid_bdev1",
00:21:10.418    "uuid": "13e61ee0-a40c-42fe-98a2-f5969c06308a",
00:21:10.418    "strip_size_kb": 0,
00:21:10.418    "state": "online",
00:21:10.418    "raid_level": "raid1",
00:21:10.418    "superblock": true,
00:21:10.418    "num_base_bdevs": 4,
00:21:10.418    "num_base_bdevs_discovered": 4,
00:21:10.418    "num_base_bdevs_operational": 4,
00:21:10.418    "process": {
00:21:10.418      "type": "rebuild",
00:21:10.418      "target": "spare",
00:21:10.418      "progress": {
00:21:10.418        "blocks": 24576,
00:21:10.418        "percent": 38
00:21:10.418      }
00:21:10.418    },
00:21:10.418    "base_bdevs_list": [
00:21:10.418      {
00:21:10.418        "name": "spare",
00:21:10.418        "uuid": "e0395d1b-06e9-5435-b324-7ae83e565625",
00:21:10.418        "is_configured": true,
00:21:10.418        "data_offset": 2048,
00:21:10.418        "data_size": 63488
00:21:10.418      },
00:21:10.418      {
00:21:10.418        "name": "BaseBdev2",
00:21:10.418        "uuid": "cbaaceb9-b765-5b83-956a-014832e76d85",
00:21:10.418        "is_configured": true,
00:21:10.418        "data_offset": 2048,
00:21:10.418        "data_size": 63488
00:21:10.418      },
00:21:10.418      {
00:21:10.418        "name": "BaseBdev3",
00:21:10.418        "uuid": "04b7c2ee-fe4e-5748-ac3d-f360b407dd82",
00:21:10.418        "is_configured": true,
00:21:10.418        "data_offset": 2048,
00:21:10.418        "data_size": 63488
00:21:10.418      },
00:21:10.418      {
00:21:10.418        "name": "BaseBdev4",
00:21:10.418        "uuid": "fc0315ae-1f7d-5bef-b30a-b659c189e931",
00:21:10.418        "is_configured": true,
00:21:10.418        "data_offset": 2048,
00:21:10.418        "data_size": 63488
00:21:10.418      }
00:21:10.418    ]
00:21:10.418  }'
00:21:10.418    23:54:40	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:10.418   23:54:41	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:10.418    23:54:41	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:10.418   23:54:41	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:10.418   23:54:41	-- bdev/bdev_raid.sh@617 -- # '[' true = true ']'
00:21:10.418   23:54:41	-- bdev/bdev_raid.sh@617 -- # '[' = false ']'
00:21:10.418  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected
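The message above records a real quoting bug in the test script: whatever variable line 617 tests expanded to nothing, so test saw '[' = false ']' and reported "unary operator expected". The run proceeds anyway because the failed condition happens to fall through to the intended branch. A minimal sketch of the usual fix, with a hypothetical variable name since the log does not show which one was empty:

  # hypothetical name; quote the expansion so an empty value still parses
  if [ "$some_flag" = false ]; then :; fi
  # or use [[ ]], which does not word-split its operands
  if [[ $some_flag == false ]]; then :; fi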
00:21:10.418   23:54:41	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4
00:21:10.418   23:54:41	-- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:21:10.418   23:54:41	-- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']'
00:21:10.418   23:54:41	-- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:21:10.677  [2024-12-13 23:54:41.315573] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:21:10.677  [2024-12-13 23:54:41.355539] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3360
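This second removal happens mid-rebuild, which is the point of the '4 -gt 2' guard traced just before it: a four-member raid1 can lose another device and stay online, so the script blanks the slot in its base_bdevs array, decrements its own operational count, and expects the rebuild onto the spare to keep running, as the next verify confirms (operational 3, rebuild still in progress). Condensed:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev2   # while rebuilding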
00:21:10.935   23:54:41	-- bdev/bdev_raid.sh@649 -- # base_bdevs[1]=
00:21:10.935   23:54:41	-- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- ))
00:21:10.935   23:54:41	-- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:10.935   23:54:41	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:10.935   23:54:41	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:10.935   23:54:41	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:10.935   23:54:41	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:10.935    23:54:41	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:10.935    23:54:41	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:11.193   23:54:41	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:11.193    "name": "raid_bdev1",
00:21:11.193    "uuid": "13e61ee0-a40c-42fe-98a2-f5969c06308a",
00:21:11.193    "strip_size_kb": 0,
00:21:11.193    "state": "online",
00:21:11.193    "raid_level": "raid1",
00:21:11.193    "superblock": true,
00:21:11.193    "num_base_bdevs": 4,
00:21:11.193    "num_base_bdevs_discovered": 3,
00:21:11.194    "num_base_bdevs_operational": 3,
00:21:11.194    "process": {
00:21:11.194      "type": "rebuild",
00:21:11.194      "target": "spare",
00:21:11.194      "progress": {
00:21:11.194        "blocks": 38912,
00:21:11.194        "percent": 61
00:21:11.194      }
00:21:11.194    },
00:21:11.194    "base_bdevs_list": [
00:21:11.194      {
00:21:11.194        "name": "spare",
00:21:11.194        "uuid": "e0395d1b-06e9-5435-b324-7ae83e565625",
00:21:11.194        "is_configured": true,
00:21:11.194        "data_offset": 2048,
00:21:11.194        "data_size": 63488
00:21:11.194      },
00:21:11.194      {
00:21:11.194        "name": null,
00:21:11.194        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:11.194        "is_configured": false,
00:21:11.194        "data_offset": 2048,
00:21:11.194        "data_size": 63488
00:21:11.194      },
00:21:11.194      {
00:21:11.194        "name": "BaseBdev3",
00:21:11.194        "uuid": "04b7c2ee-fe4e-5748-ac3d-f360b407dd82",
00:21:11.194        "is_configured": true,
00:21:11.194        "data_offset": 2048,
00:21:11.194        "data_size": 63488
00:21:11.194      },
00:21:11.194      {
00:21:11.194        "name": "BaseBdev4",
00:21:11.194        "uuid": "fc0315ae-1f7d-5bef-b30a-b659c189e931",
00:21:11.194        "is_configured": true,
00:21:11.194        "data_offset": 2048,
00:21:11.194        "data_size": 63488
00:21:11.194      }
00:21:11.194    ]
00:21:11.194  }'
00:21:11.194    23:54:41	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:11.194   23:54:41	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:11.194    23:54:41	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:11.194   23:54:41	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:11.194   23:54:41	-- bdev/bdev_raid.sh@657 -- # local timeout=492
00:21:11.194   23:54:41	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:21:11.194   23:54:41	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:11.194   23:54:41	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:11.194   23:54:41	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:11.194   23:54:41	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:11.194   23:54:41	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:11.194    23:54:41	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:11.194    23:54:41	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:11.452   23:54:42	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:11.452    "name": "raid_bdev1",
00:21:11.452    "uuid": "13e61ee0-a40c-42fe-98a2-f5969c06308a",
00:21:11.452    "strip_size_kb": 0,
00:21:11.452    "state": "online",
00:21:11.452    "raid_level": "raid1",
00:21:11.452    "superblock": true,
00:21:11.452    "num_base_bdevs": 4,
00:21:11.452    "num_base_bdevs_discovered": 3,
00:21:11.452    "num_base_bdevs_operational": 3,
00:21:11.452    "process": {
00:21:11.452      "type": "rebuild",
00:21:11.452      "target": "spare",
00:21:11.452      "progress": {
00:21:11.452        "blocks": 47104,
00:21:11.452        "percent": 74
00:21:11.452      }
00:21:11.452    },
00:21:11.452    "base_bdevs_list": [
00:21:11.452      {
00:21:11.452        "name": "spare",
00:21:11.452        "uuid": "e0395d1b-06e9-5435-b324-7ae83e565625",
00:21:11.452        "is_configured": true,
00:21:11.452        "data_offset": 2048,
00:21:11.452        "data_size": 63488
00:21:11.452      },
00:21:11.452      {
00:21:11.452        "name": null,
00:21:11.452        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:11.452        "is_configured": false,
00:21:11.452        "data_offset": 2048,
00:21:11.452        "data_size": 63488
00:21:11.452      },
00:21:11.452      {
00:21:11.452        "name": "BaseBdev3",
00:21:11.452        "uuid": "04b7c2ee-fe4e-5748-ac3d-f360b407dd82",
00:21:11.452        "is_configured": true,
00:21:11.452        "data_offset": 2048,
00:21:11.452        "data_size": 63488
00:21:11.452      },
00:21:11.452      {
00:21:11.452        "name": "BaseBdev4",
00:21:11.452        "uuid": "fc0315ae-1f7d-5bef-b30a-b659c189e931",
00:21:11.452        "is_configured": true,
00:21:11.452        "data_offset": 2048,
00:21:11.452        "data_size": 63488
00:21:11.452      }
00:21:11.452    ]
00:21:11.452  }'
00:21:11.452    23:54:42	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:11.452   23:54:42	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:11.452    23:54:42	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:11.452   23:54:42	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:11.452   23:54:42	-- bdev/bdev_raid.sh@662 -- # sleep 1
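Lines 657-662 of the script form a bounded poll: SECONDS is bash's built-in count of seconds since the shell started, so timeout=492 caps the wait relative to the whole script run, and each pass re-verifies that a rebuild targeting the spare is still reported before sleeping. Once the process object disappears (as it does after the "Finished rebuild" notice below), the verify mismatch breaks the loop. A sketch of the same loop shape:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  timeout=492
  while (( SECONDS < timeout )); do
      t=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
          | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
      [ "$t" = rebuild ] || break   # process object vanishes when done
      sleep 1
  done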
00:21:12.388  [2024-12-13 23:54:42.863978] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:21:12.388  [2024-12-13 23:54:42.864044] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:21:12.388  [2024-12-13 23:54:42.864183] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:12.648   23:54:43	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:21:12.648   23:54:43	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:12.648   23:54:43	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:12.648   23:54:43	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:12.648   23:54:43	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:12.648   23:54:43	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:12.648    23:54:43	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:12.648    23:54:43	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:12.921   23:54:43	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:12.921    "name": "raid_bdev1",
00:21:12.921    "uuid": "13e61ee0-a40c-42fe-98a2-f5969c06308a",
00:21:12.921    "strip_size_kb": 0,
00:21:12.921    "state": "online",
00:21:12.921    "raid_level": "raid1",
00:21:12.921    "superblock": true,
00:21:12.921    "num_base_bdevs": 4,
00:21:12.921    "num_base_bdevs_discovered": 3,
00:21:12.922    "num_base_bdevs_operational": 3,
00:21:12.922    "base_bdevs_list": [
00:21:12.922      {
00:21:12.922        "name": "spare",
00:21:12.922        "uuid": "e0395d1b-06e9-5435-b324-7ae83e565625",
00:21:12.922        "is_configured": true,
00:21:12.922        "data_offset": 2048,
00:21:12.922        "data_size": 63488
00:21:12.922      },
00:21:12.922      {
00:21:12.922        "name": null,
00:21:12.922        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:12.922        "is_configured": false,
00:21:12.922        "data_offset": 2048,
00:21:12.922        "data_size": 63488
00:21:12.922      },
00:21:12.922      {
00:21:12.922        "name": "BaseBdev3",
00:21:12.922        "uuid": "04b7c2ee-fe4e-5748-ac3d-f360b407dd82",
00:21:12.922        "is_configured": true,
00:21:12.922        "data_offset": 2048,
00:21:12.922        "data_size": 63488
00:21:12.922      },
00:21:12.922      {
00:21:12.922        "name": "BaseBdev4",
00:21:12.922        "uuid": "fc0315ae-1f7d-5bef-b30a-b659c189e931",
00:21:12.922        "is_configured": true,
00:21:12.922        "data_offset": 2048,
00:21:12.922        "data_size": 63488
00:21:12.922      }
00:21:12.922    ]
00:21:12.922  }'
00:21:12.922    23:54:43	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:12.922   23:54:43	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:21:12.922    23:54:43	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:12.922   23:54:43	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:21:12.922   23:54:43	-- bdev/bdev_raid.sh@660 -- # break
00:21:12.922   23:54:43	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:21:12.922   23:54:43	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:12.922   23:54:43	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:21:12.922   23:54:43	-- bdev/bdev_raid.sh@185 -- # local target=none
00:21:12.922   23:54:43	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:12.922    23:54:43	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:12.922    23:54:43	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:13.193   23:54:43	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:13.193    "name": "raid_bdev1",
00:21:13.193    "uuid": "13e61ee0-a40c-42fe-98a2-f5969c06308a",
00:21:13.193    "strip_size_kb": 0,
00:21:13.193    "state": "online",
00:21:13.193    "raid_level": "raid1",
00:21:13.193    "superblock": true,
00:21:13.193    "num_base_bdevs": 4,
00:21:13.193    "num_base_bdevs_discovered": 3,
00:21:13.193    "num_base_bdevs_operational": 3,
00:21:13.193    "base_bdevs_list": [
00:21:13.193      {
00:21:13.193        "name": "spare",
00:21:13.193        "uuid": "e0395d1b-06e9-5435-b324-7ae83e565625",
00:21:13.193        "is_configured": true,
00:21:13.193        "data_offset": 2048,
00:21:13.193        "data_size": 63488
00:21:13.193      },
00:21:13.193      {
00:21:13.193        "name": null,
00:21:13.193        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:13.193        "is_configured": false,
00:21:13.193        "data_offset": 2048,
00:21:13.193        "data_size": 63488
00:21:13.193      },
00:21:13.193      {
00:21:13.193        "name": "BaseBdev3",
00:21:13.193        "uuid": "04b7c2ee-fe4e-5748-ac3d-f360b407dd82",
00:21:13.193        "is_configured": true,
00:21:13.193        "data_offset": 2048,
00:21:13.193        "data_size": 63488
00:21:13.193      },
00:21:13.193      {
00:21:13.193        "name": "BaseBdev4",
00:21:13.193        "uuid": "fc0315ae-1f7d-5bef-b30a-b659c189e931",
00:21:13.193        "is_configured": true,
00:21:13.193        "data_offset": 2048,
00:21:13.193        "data_size": 63488
00:21:13.193      }
00:21:13.193    ]
00:21:13.193  }'
00:21:13.193    23:54:43	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:13.193   23:54:43	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:21:13.193    23:54:43	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:13.193   23:54:43	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:21:13.193   23:54:43	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:13.193   23:54:43	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:13.193   23:54:43	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:13.193   23:54:43	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:13.193   23:54:43	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:13.193   23:54:43	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:13.193   23:54:43	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:13.193   23:54:43	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:13.193   23:54:43	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:13.193   23:54:43	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:13.193    23:54:43	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:13.193    23:54:43	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:13.452   23:54:44	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:13.452    "name": "raid_bdev1",
00:21:13.452    "uuid": "13e61ee0-a40c-42fe-98a2-f5969c06308a",
00:21:13.452    "strip_size_kb": 0,
00:21:13.452    "state": "online",
00:21:13.452    "raid_level": "raid1",
00:21:13.452    "superblock": true,
00:21:13.452    "num_base_bdevs": 4,
00:21:13.452    "num_base_bdevs_discovered": 3,
00:21:13.452    "num_base_bdevs_operational": 3,
00:21:13.452    "base_bdevs_list": [
00:21:13.452      {
00:21:13.452        "name": "spare",
00:21:13.452        "uuid": "e0395d1b-06e9-5435-b324-7ae83e565625",
00:21:13.452        "is_configured": true,
00:21:13.452        "data_offset": 2048,
00:21:13.452        "data_size": 63488
00:21:13.452      },
00:21:13.452      {
00:21:13.452        "name": null,
00:21:13.452        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:13.452        "is_configured": false,
00:21:13.452        "data_offset": 2048,
00:21:13.452        "data_size": 63488
00:21:13.452      },
00:21:13.452      {
00:21:13.452        "name": "BaseBdev3",
00:21:13.452        "uuid": "04b7c2ee-fe4e-5748-ac3d-f360b407dd82",
00:21:13.452        "is_configured": true,
00:21:13.452        "data_offset": 2048,
00:21:13.452        "data_size": 63488
00:21:13.452      },
00:21:13.452      {
00:21:13.452        "name": "BaseBdev4",
00:21:13.452        "uuid": "fc0315ae-1f7d-5bef-b30a-b659c189e931",
00:21:13.452        "is_configured": true,
00:21:13.452        "data_offset": 2048,
00:21:13.452        "data_size": 63488
00:21:13.452      }
00:21:13.452    ]
00:21:13.452  }'
00:21:13.452   23:54:44	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:13.452   23:54:44	-- common/autotest_common.sh@10 -- # set +x
00:21:14.019   23:54:44	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:21:14.278  [2024-12-13 23:54:44.930579] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:21:14.278  [2024-12-13 23:54:44.930608] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:21:14.278  [2024-12-13 23:54:44.930718] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:14.278  [2024-12-13 23:54:44.930803] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:21:14.278  [2024-12-13 23:54:44.930814] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline
00:21:14.278    23:54:44	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:14.278    23:54:44	-- bdev/bdev_raid.sh@671 -- # jq length
00:21:14.536   23:54:45	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
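Deleting the array flips it from online to offline and frees it (the destruct debug lines above), after which the get_bdevs list must come back empty, hence the '[[ 0 == 0 ]]' check. Sketch:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_delete raid_bdev1
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq length   # expect 0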
00:21:14.536   23:54:45	-- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:21:14.536   23:54:45	-- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:21:14.536   23:54:45	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:14.536   23:54:45	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:21:14.536   23:54:45	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:14.536   23:54:45	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:21:14.536   23:54:45	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:14.536   23:54:45	-- bdev/nbd_common.sh@12 -- # local i
00:21:14.536   23:54:45	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:14.536   23:54:45	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:21:14.536   23:54:45	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:21:14.794  /dev/nbd0
00:21:14.794    23:54:45	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:21:14.794   23:54:45	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:21:14.794   23:54:45	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:21:14.794   23:54:45	-- common/autotest_common.sh@867 -- # local i
00:21:14.794   23:54:45	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:21:14.794   23:54:45	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:21:14.794   23:54:45	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:21:14.794   23:54:45	-- common/autotest_common.sh@871 -- # break
00:21:14.794   23:54:45	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:21:14.794   23:54:45	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:21:14.794   23:54:45	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:14.794  1+0 records in
00:21:14.794  1+0 records out
00:21:14.794  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498331 s, 8.2 MB/s
00:21:14.794    23:54:45	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:14.794   23:54:45	-- common/autotest_common.sh@884 -- # size=4096
00:21:14.794   23:54:45	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:14.794   23:54:45	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:21:14.794   23:54:45	-- common/autotest_common.sh@887 -- # return 0
00:21:14.794   23:54:45	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:14.794   23:54:45	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:21:14.794   23:54:45	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:21:15.052  /dev/nbd1
00:21:15.052    23:54:45	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:21:15.052   23:54:45	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:21:15.052   23:54:45	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:21:15.052   23:54:45	-- common/autotest_common.sh@867 -- # local i
00:21:15.052   23:54:45	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:21:15.052   23:54:45	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:21:15.052   23:54:45	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:21:15.052   23:54:45	-- common/autotest_common.sh@871 -- # break
00:21:15.052   23:54:45	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:21:15.052   23:54:45	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:21:15.052   23:54:45	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:15.052  1+0 records in
00:21:15.052  1+0 records out
00:21:15.052  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397245 s, 10.3 MB/s
00:21:15.052    23:54:45	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:15.052   23:54:45	-- common/autotest_common.sh@884 -- # size=4096
00:21:15.052   23:54:45	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:15.052   23:54:45	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:21:15.052   23:54:45	-- common/autotest_common.sh@887 -- # return 0
00:21:15.052   23:54:45	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:15.052   23:54:45	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:21:15.052   23:54:45	-- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
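With the raid gone, BaseBdev1 and spare are exposed raw over nbd and compared directly. The -i 1048576 skips the first 2048 blocks x 512 B = 1 MiB on both devices, i.e. the superblock/metadata region at data_offset, so only the mirrored data is compared; a clean cmp proves the spare's rebuilt contents match the original member's. The arithmetic and call:

  echo $(( 2048 * 512 ))               # 1048576, the skip cmp -i uses
  cmp -i 1048576 /dev/nbd0 /dev/nbd1   # nbd0=BaseBdev1, nbd1=spare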
00:21:15.311   23:54:45	-- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:21:15.311   23:54:45	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:15.311   23:54:45	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:21:15.311   23:54:45	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:15.311   23:54:45	-- bdev/nbd_common.sh@51 -- # local i
00:21:15.311   23:54:45	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:15.311   23:54:45	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:21:15.570    23:54:46	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:21:15.570   23:54:46	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:21:15.570   23:54:46	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:21:15.570   23:54:46	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:15.570   23:54:46	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:15.570   23:54:46	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:21:15.570   23:54:46	-- bdev/nbd_common.sh@41 -- # break
00:21:15.570   23:54:46	-- bdev/nbd_common.sh@45 -- # return 0
00:21:15.570   23:54:46	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:15.570   23:54:46	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:21:15.828    23:54:46	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:21:15.828   23:54:46	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:21:15.828   23:54:46	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:21:15.828   23:54:46	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:15.828   23:54:46	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:15.828   23:54:46	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:21:15.828   23:54:46	-- bdev/nbd_common.sh@41 -- # break
00:21:15.828   23:54:46	-- bdev/nbd_common.sh@45 -- # return 0
00:21:15.828   23:54:46	-- bdev/bdev_raid.sh@692 -- # '[' true = true ']'
00:21:15.828   23:54:46	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:21:15.828   23:54:46	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']'
00:21:15.828   23:54:46	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:21:16.087   23:54:46	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:21:16.087  [2024-12-13 23:54:46.786977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:21:16.087  [2024-12-13 23:54:46.787056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:16.087  [2024-12-13 23:54:46.787099] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:21:16.087  [2024-12-13 23:54:46.787121] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:16.087  [2024-12-13 23:54:46.789456] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:16.087  [2024-12-13 23:54:46.789522] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:21:16.087  [2024-12-13 23:54:46.789638] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:21:16.087  [2024-12-13 23:54:46.789689] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:21:16.087  BaseBdev1
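Here the passthru delete/create cycle pays off: re-creating BaseBdev1 on top of the untouched malloc lets SPDK's examine path find the raid superblock written at create time (the "raid superblock found" debug above) and begin reassembling raid_bdev1 from it, claiming BaseBdev1 with no explicit bdev_raid_create. Condensed:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_passthru_delete BaseBdev1
  "$rpc" -s "$sock" bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  # examine reads the superblock from the malloc underneath and claims
  # BaseBdev1 into the reassembling raid automatically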
00:21:16.087   23:54:46	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:21:16.087   23:54:46	-- bdev/bdev_raid.sh@695 -- # '[' -z '' ']'
00:21:16.087   23:54:46	-- bdev/bdev_raid.sh@696 -- # continue
00:21:16.087   23:54:46	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:21:16.087   23:54:46	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']'
00:21:16.087   23:54:46	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3
00:21:16.346   23:54:46	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:21:16.605  [2024-12-13 23:54:47.171013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:21:16.605  [2024-12-13 23:54:47.171244] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:16.605  [2024-12-13 23:54:47.171322] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:21:16.605  [2024-12-13 23:54:47.171449] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:16.605  [2024-12-13 23:54:47.171891] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:16.605  [2024-12-13 23:54:47.172069] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:21:16.605  [2024-12-13 23:54:47.172279] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3
00:21:16.605  [2024-12-13 23:54:47.172377] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1)
00:21:16.605  [2024-12-13 23:54:47.172483] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:21:16.605  [2024-12-13 23:54:47.172538] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring
00:21:16.605  [2024-12-13 23:54:47.172708] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:21:16.605  BaseBdev3
00:21:16.605   23:54:47	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:21:16.605   23:54:47	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']'
00:21:16.605   23:54:47	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4
00:21:16.864   23:54:47	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:21:17.123  [2024-12-13 23:54:47.599088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:21:17.123  [2024-12-13 23:54:47.599303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:17.123  [2024-12-13 23:54:47.599401] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:21:17.123  [2024-12-13 23:54:47.599530] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:17.123  [2024-12-13 23:54:47.599998] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:17.123  [2024-12-13 23:54:47.600190] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:21:17.123  [2024-12-13 23:54:47.600386] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4
00:21:17.123  [2024-12-13 23:54:47.600509] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:21:17.123  BaseBdev4
00:21:17.123   23:54:47	-- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:21:17.382   23:54:47	-- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:21:17.382  [2024-12-13 23:54:48.043175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:21:17.382  [2024-12-13 23:54:48.043390] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:17.382  [2024-12-13 23:54:48.043458] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:21:17.382  [2024-12-13 23:54:48.043585] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:17.382  [2024-12-13 23:54:48.044055] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:17.382  [2024-12-13 23:54:48.044235] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:21:17.382  [2024-12-13 23:54:48.044463] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare
00:21:17.382  [2024-12-13 23:54:48.044616] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:17.382  spare
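A hedged reading of the trace above: for each surviving base, the script tears down the stale passthru bdev and re-creates it over the matching malloc device before re-examining the array. A condensed sketch of that @694-@699 loop, with rpc and sock as shorthand introduced here for the paths used throughout this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for bdev in BaseBdev3 BaseBdev4; do
        [ -z "$bdev" ] && continue                      # empty slots are skipped (@695-@696)
        "$rpc" -s "$sock" bdev_passthru_delete "$bdev"  # drop the old passthru (@698)
        "$rpc" -s "$sock" bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"  # rebuild it (@699)
    done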
00:21:17.382   23:54:48	-- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:17.382   23:54:48	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:17.382   23:54:48	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:17.382   23:54:48	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:17.382   23:54:48	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:17.382   23:54:48	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:17.382   23:54:48	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:17.382   23:54:48	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:17.382   23:54:48	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:17.382   23:54:48	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:17.382    23:54:48	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:17.382    23:54:48	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:17.641  [2024-12-13 23:54:48.144760] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080
00:21:17.641  [2024-12-13 23:54:48.144921] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:21:17.641  [2024-12-13 23:54:48.145063] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0
00:21:17.641  [2024-12-13 23:54:48.145772] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080
00:21:17.641  [2024-12-13 23:54:48.145908] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080
00:21:17.641  [2024-12-13 23:54:48.146115] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:17.641   23:54:48	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:17.641    "name": "raid_bdev1",
00:21:17.641    "uuid": "13e61ee0-a40c-42fe-98a2-f5969c06308a",
00:21:17.641    "strip_size_kb": 0,
00:21:17.641    "state": "online",
00:21:17.641    "raid_level": "raid1",
00:21:17.641    "superblock": true,
00:21:17.641    "num_base_bdevs": 4,
00:21:17.641    "num_base_bdevs_discovered": 3,
00:21:17.641    "num_base_bdevs_operational": 3,
00:21:17.641    "base_bdevs_list": [
00:21:17.641      {
00:21:17.641        "name": "spare",
00:21:17.641        "uuid": "e0395d1b-06e9-5435-b324-7ae83e565625",
00:21:17.641        "is_configured": true,
00:21:17.641        "data_offset": 2048,
00:21:17.641        "data_size": 63488
00:21:17.641      },
00:21:17.641      {
00:21:17.641        "name": null,
00:21:17.641        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:17.641        "is_configured": false,
00:21:17.641        "data_offset": 2048,
00:21:17.641        "data_size": 63488
00:21:17.641      },
00:21:17.641      {
00:21:17.641        "name": "BaseBdev3",
00:21:17.641        "uuid": "04b7c2ee-fe4e-5748-ac3d-f360b407dd82",
00:21:17.641        "is_configured": true,
00:21:17.641        "data_offset": 2048,
00:21:17.641        "data_size": 63488
00:21:17.641      },
00:21:17.641      {
00:21:17.641        "name": "BaseBdev4",
00:21:17.641        "uuid": "fc0315ae-1f7d-5bef-b30a-b659c189e931",
00:21:17.641        "is_configured": true,
00:21:17.641        "data_offset": 2048,
00:21:17.641        "data_size": 63488
00:21:17.641      }
00:21:17.641    ]
00:21:17.641  }'
00:21:17.641   23:54:48	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:17.641   23:54:48	-- common/autotest_common.sh@10 -- # set +x
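For reference, verify_raid_bdev_state reduces to one bdev_raid_get_bdevs call plus jq assertions on the fields visible in the dump above. A minimal sketch of those checks (not the literal function body, just the shape implied by @117-@129):

    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
           | jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r .state <<<"$info")" = online ]                   # expected_state
    [ "$(jq -r .raid_level <<<"$info")" = raid1 ]               # raid_level
    [ "$(jq -r .strip_size_kb <<<"$info")" -eq 0 ]              # strip_size
    [ "$(jq -r .num_base_bdevs_discovered <<<"$info")" -eq 3 ]  # 3 of 4 slots populated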
00:21:18.209   23:54:48	-- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none
00:21:18.209   23:54:48	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:18.209   23:54:48	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:21:18.209   23:54:48	-- bdev/bdev_raid.sh@185 -- # local target=none
00:21:18.209   23:54:48	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:18.209    23:54:48	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:18.209    23:54:48	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:18.467   23:54:49	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:18.467    "name": "raid_bdev1",
00:21:18.467    "uuid": "13e61ee0-a40c-42fe-98a2-f5969c06308a",
00:21:18.467    "strip_size_kb": 0,
00:21:18.467    "state": "online",
00:21:18.467    "raid_level": "raid1",
00:21:18.467    "superblock": true,
00:21:18.467    "num_base_bdevs": 4,
00:21:18.467    "num_base_bdevs_discovered": 3,
00:21:18.467    "num_base_bdevs_operational": 3,
00:21:18.467    "base_bdevs_list": [
00:21:18.467      {
00:21:18.467        "name": "spare",
00:21:18.467        "uuid": "e0395d1b-06e9-5435-b324-7ae83e565625",
00:21:18.467        "is_configured": true,
00:21:18.467        "data_offset": 2048,
00:21:18.467        "data_size": 63488
00:21:18.467      },
00:21:18.467      {
00:21:18.467        "name": null,
00:21:18.467        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:18.467        "is_configured": false,
00:21:18.467        "data_offset": 2048,
00:21:18.467        "data_size": 63488
00:21:18.467      },
00:21:18.467      {
00:21:18.467        "name": "BaseBdev3",
00:21:18.467        "uuid": "04b7c2ee-fe4e-5748-ac3d-f360b407dd82",
00:21:18.467        "is_configured": true,
00:21:18.467        "data_offset": 2048,
00:21:18.467        "data_size": 63488
00:21:18.467      },
00:21:18.467      {
00:21:18.467        "name": "BaseBdev4",
00:21:18.467        "uuid": "fc0315ae-1f7d-5bef-b30a-b659c189e931",
00:21:18.467        "is_configured": true,
00:21:18.467        "data_offset": 2048,
00:21:18.467        "data_size": 63488
00:21:18.467      }
00:21:18.467    ]
00:21:18.467  }'
00:21:18.467    23:54:49	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:18.467   23:54:49	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:21:18.467    23:54:49	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:18.468   23:54:49	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:21:18.468    23:54:49	-- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:18.468    23:54:49	-- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name'
00:21:18.726   23:54:49	-- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]]
00:21:18.726   23:54:49	-- bdev/bdev_raid.sh@709 -- # killprocess 125002
00:21:18.726   23:54:49	-- common/autotest_common.sh@936 -- # '[' -z 125002 ']'
00:21:18.726   23:54:49	-- common/autotest_common.sh@940 -- # kill -0 125002
00:21:18.726    23:54:49	-- common/autotest_common.sh@941 -- # uname
00:21:18.726   23:54:49	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:18.726    23:54:49	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125002
00:21:18.726   23:54:49	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:18.726   23:54:49	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:18.726   23:54:49	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 125002'
00:21:18.726  killing process with pid 125002
00:21:18.726   23:54:49	-- common/autotest_common.sh@955 -- # kill 125002
00:21:18.726  Received shutdown signal, test time was about 60.000000 seconds
00:21:18.726  
[2024-12-13T23:54:49.458Z]                                                                               Latency(us)
[2024-12-13T23:54:49.458Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-13T23:54:49.458Z]  ===================================================================================================================
[2024-12-13T23:54:49.458Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:21:18.726   23:54:49	-- common/autotest_common.sh@960 -- # wait 125002
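Two notes on the shutdown block above. The 18446744073709551616.00 in the min column is 2^64: with zero I/O completed, what is presumably a UINT64_MAX min-latency initializer gets printed unmodified. And the killprocess helper traced at @936-@960 amounts to roughly the following sketch (simplified; the sudo branch at @946 is handled differently in the real autotest_common.sh):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                 # '[' -z ... ']' guard (@936)
        kill -0 "$pid" || return 1                # process must still be alive (@940)
        if [ "$(uname)" = Linux ]; then           # @941
            local name; name=$(ps --no-headers -o comm= "$pid")  # @942
            [ "$name" = sudo ] && return 1        # sudo wrappers take the @946 branch
        fi
        echo "killing process with pid $pid"      # @954
        kill "$pid" && wait "$pid"                # @955, @960
    }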
00:21:18.726  [2024-12-13 23:54:49.409013] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:21:18.726  [2024-12-13 23:54:49.409075] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:18.726  [2024-12-13 23:54:49.409143] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:21:18.726  [2024-12-13 23:54:49.409154] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline
00:21:19.294  [2024-12-13 23:54:49.740237] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:21:20.230  ************************************
00:21:20.230  END TEST raid_rebuild_test_sb
00:21:20.230  ************************************
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@711 -- # return 0
00:21:20.230  
00:21:20.230  real	0m26.548s
00:21:20.230  user	0m38.577s
00:21:20.230  sys	0m3.893s
00:21:20.230   23:54:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:21:20.230   23:54:50	-- common/autotest_common.sh@10 -- # set +x
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true
00:21:20.230   23:54:50	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:21:20.230   23:54:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:20.230   23:54:50	-- common/autotest_common.sh@10 -- # set +x
00:21:20.230  ************************************
00:21:20.230  START TEST raid_rebuild_test_io
00:21:20.230  ************************************
00:21:20.230   23:54:50	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false true
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@519 -- # local superblock=false
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@520 -- # local background_io=true
00:21:20.230    23:54:50	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:21:20.230    23:54:50	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:20.230    23:54:50	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:21:20.230    23:54:50	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:20.230    23:54:50	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:20.230    23:54:50	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:21:20.230    23:54:50	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:20.230    23:54:50	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:20.230    23:54:50	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:21:20.230    23:54:50	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:20.230    23:54:50	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:20.230    23:54:50	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev4
00:21:20.230    23:54:50	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:20.230    23:54:50	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@536 -- # strip_size=0
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@539 -- # '[' false = true ']'
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@544 -- # raid_pid=125661
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:21:20.230   23:54:50	-- bdev/bdev_raid.sh@545 -- # waitforlisten 125661 /var/tmp/spdk-raid.sock
00:21:20.230   23:54:50	-- common/autotest_common.sh@829 -- # '[' -z 125661 ']'
00:21:20.230   23:54:50	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:21:20.230   23:54:50	-- common/autotest_common.sh@834 -- # local max_retries=100
00:21:20.230   23:54:50	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:21:20.230  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:21:20.230   23:54:50	-- common/autotest_common.sh@838 -- # xtrace_disable
00:21:20.230   23:54:50	-- common/autotest_common.sh@10 -- # set +x
00:21:20.230  [2024-12-13 23:54:50.907851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:20.230  [2024-12-13 23:54:50.908244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125661 ]
00:21:20.230  I/O size of 3145728 is greater than zero copy threshold (65536).
00:21:20.230  Zero copy mechanism will not be used.
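The zero-copy notice follows directly from the @543 command line: -o 3M requests 3 MiB (3145728-byte) I/Os, well above bdevperf's 65536-byte zero-copy threshold. An annotated restatement of that invocation (the comments are this editor's reading of the flags, not SPDK documentation):

    build/examples/bdevperf -r /var/tmp/spdk-raid.sock \  # RPC socket shared with the test
        -T raid_bdev1 \      # exercise only the raid bdev under test
        -t 60 \              # 60-second run ("Running I/O for 60 seconds..." below)
        -w randrw -M 50 \    # random mixed workload, 50% reads
        -o 3M -q 2 \         # 3 MiB I/O size, queue depth 2
        -U -z \              # -z defers I/O until a perform_tests RPC arrives
        -L bdev_raid         # enable the *DEBUG* bdev_raid log flag seen throughout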
00:21:20.489  [2024-12-13 23:54:51.085528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:20.748  [2024-12-13 23:54:51.265287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:20.748  [2024-12-13 23:54:51.450833] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:21:21.315   23:54:51	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:21.315   23:54:51	-- common/autotest_common.sh@862 -- # return 0
00:21:21.315   23:54:51	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:21:21.315   23:54:51	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:21:21.315   23:54:51	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:21:21.315  BaseBdev1
00:21:21.315   23:54:52	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:21:21.315   23:54:52	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:21:21.315   23:54:52	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:21:21.574  BaseBdev2
00:21:21.574   23:54:52	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:21:21.574   23:54:52	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:21:21.574   23:54:52	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:21:21.833  BaseBdev3
00:21:21.833   23:54:52	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:21:21.833   23:54:52	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:21:21.833   23:54:52	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:21:22.091  BaseBdev4
00:21:22.091   23:54:52	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:21:22.352  spare_malloc
00:21:22.352   23:54:52	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:21:22.611  spare_delay
00:21:22.611   23:54:53	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:21:22.870  [2024-12-13 23:54:53.406740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:21:22.870  [2024-12-13 23:54:53.406966] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:22.870  [2024-12-13 23:54:53.407040] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:21:22.870  [2024-12-13 23:54:53.407380] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:22.870  [2024-12-13 23:54:53.409717] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:22.870  [2024-12-13 23:54:53.409899] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:21:22.870  spare
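Putting the last three RPCs together, the spare device is a three-layer stack, malloc under delay under passthru, so rebuild I/O to it can be slowed and observed. A sketch of the stack as built at @558-@560 (reading -r/-t/-w/-n as average and p99 read/write latencies in microseconds; treat that interpretation as hedged):

    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b spare_malloc     # 32 MiB backing store, 512 B blocks
    "$rpc" -s "$sock" bdev_delay_create -b spare_malloc -d spare_delay \
            -r 0 -t 0 -w 100000 -n 100000                           # writes delayed ~100 ms
    "$rpc" -s "$sock" bdev_passthru_create -b spare_delay -p spare  # claimable top layer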
00:21:22.870   23:54:53	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:21:22.870  [2024-12-13 23:54:53.578805] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:21:22.870  [2024-12-13 23:54:53.580825] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:21:22.870  [2024-12-13 23:54:53.580997] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:21:22.870  [2024-12-13 23:54:53.581074] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:21:22.870  [2024-12-13 23:54:53.581255] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80
00:21:22.870  [2024-12-13 23:54:53.581299] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:21:22.870  [2024-12-13 23:54:53.581549] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930
00:21:22.870  [2024-12-13 23:54:53.581966] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80
00:21:22.870  [2024-12-13 23:54:53.582102] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80
00:21:22.870  [2024-12-13 23:54:53.582341] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
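Since this is raid1 with no superblock, the array's usable size should equal a single base bdev: 65536 blocks at data_offset 0. The @567 and @570 probes further down confirm exactly that; as standalone commands they are:

    "$rpc" -s "$sock" bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks'                 # -> 65536
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset'  # -> 0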
00:21:22.870   23:54:53	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:21:22.870   23:54:53	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:22.870   23:54:53	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:22.870   23:54:53	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:22.870   23:54:53	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:22.870   23:54:53	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:21:22.870   23:54:53	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:22.870   23:54:53	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:22.870   23:54:53	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:22.870   23:54:53	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:22.870    23:54:53	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:22.870    23:54:53	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:23.129   23:54:53	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:23.129    "name": "raid_bdev1",
00:21:23.129    "uuid": "da225ad8-c5fa-4c2d-af63-60a5b2444b8a",
00:21:23.129    "strip_size_kb": 0,
00:21:23.129    "state": "online",
00:21:23.129    "raid_level": "raid1",
00:21:23.129    "superblock": false,
00:21:23.129    "num_base_bdevs": 4,
00:21:23.129    "num_base_bdevs_discovered": 4,
00:21:23.129    "num_base_bdevs_operational": 4,
00:21:23.129    "base_bdevs_list": [
00:21:23.129      {
00:21:23.129        "name": "BaseBdev1",
00:21:23.129        "uuid": "a85b0d75-a8b7-4cb5-9aad-044b80d80531",
00:21:23.129        "is_configured": true,
00:21:23.129        "data_offset": 0,
00:21:23.129        "data_size": 65536
00:21:23.129      },
00:21:23.129      {
00:21:23.129        "name": "BaseBdev2",
00:21:23.129        "uuid": "ea4b3246-0405-439a-8047-58ab4885d667",
00:21:23.129        "is_configured": true,
00:21:23.129        "data_offset": 0,
00:21:23.129        "data_size": 65536
00:21:23.129      },
00:21:23.129      {
00:21:23.129        "name": "BaseBdev3",
00:21:23.129        "uuid": "aa22be13-d914-40de-b72f-bd50e41147b4",
00:21:23.129        "is_configured": true,
00:21:23.129        "data_offset": 0,
00:21:23.129        "data_size": 65536
00:21:23.129      },
00:21:23.129      {
00:21:23.129        "name": "BaseBdev4",
00:21:23.129        "uuid": "634beb73-391a-4ea9-a758-7237286e6918",
00:21:23.129        "is_configured": true,
00:21:23.129        "data_offset": 0,
00:21:23.129        "data_size": 65536
00:21:23.129      }
00:21:23.129    ]
00:21:23.129  }'
00:21:23.129   23:54:53	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:23.129   23:54:53	-- common/autotest_common.sh@10 -- # set +x
00:21:23.696    23:54:54	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:21:23.696    23:54:54	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:21:23.955  [2024-12-13 23:54:54.611159] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:23.955   23:54:54	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536
00:21:23.955    23:54:54	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:23.955    23:54:54	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:21:24.213   23:54:54	-- bdev/bdev_raid.sh@570 -- # data_offset=0
00:21:24.213   23:54:54	-- bdev/bdev_raid.sh@572 -- # '[' true = true ']'
00:21:24.213   23:54:54	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:21:24.213   23:54:54	-- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:21:24.213  [2024-12-13 23:54:54.914311] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:21:24.213  I/O size of 3145728 is greater than zero copy threshold (65536).
00:21:24.213  Zero copy mechanism will not be used.
00:21:24.214  Running I/O for 60 seconds...
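Worth flagging the ordering: bdevperf was started with -z, so the @574 perform_tests RPC is what actually begins the 60-second workload, and the @591 bdev_raid_remove_base_bdev hot-removes BaseBdev1 while that I/O is in flight. How bdev_raid.sh backgrounds the two calls is not visible in the trace; a plain pairing of them would be:

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &  # start I/O (@574)
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1   # degrade the array under load (@591)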
00:21:24.472  [2024-12-13 23:54:54.976511] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:21:24.472  [2024-12-13 23:54:54.982733] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00
00:21:24.472   23:54:55	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:24.472   23:54:55	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:24.472   23:54:55	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:24.472   23:54:55	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:24.472   23:54:55	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:24.472   23:54:55	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:24.472   23:54:55	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:24.472   23:54:55	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:24.472   23:54:55	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:24.472   23:54:55	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:24.472    23:54:55	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:24.472    23:54:55	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:24.730   23:54:55	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:24.730    "name": "raid_bdev1",
00:21:24.730    "uuid": "da225ad8-c5fa-4c2d-af63-60a5b2444b8a",
00:21:24.730    "strip_size_kb": 0,
00:21:24.730    "state": "online",
00:21:24.730    "raid_level": "raid1",
00:21:24.730    "superblock": false,
00:21:24.730    "num_base_bdevs": 4,
00:21:24.730    "num_base_bdevs_discovered": 3,
00:21:24.730    "num_base_bdevs_operational": 3,
00:21:24.730    "base_bdevs_list": [
00:21:24.730      {
00:21:24.730        "name": null,
00:21:24.730        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:24.730        "is_configured": false,
00:21:24.730        "data_offset": 0,
00:21:24.730        "data_size": 65536
00:21:24.730      },
00:21:24.730      {
00:21:24.730        "name": "BaseBdev2",
00:21:24.730        "uuid": "ea4b3246-0405-439a-8047-58ab4885d667",
00:21:24.730        "is_configured": true,
00:21:24.730        "data_offset": 0,
00:21:24.730        "data_size": 65536
00:21:24.730      },
00:21:24.730      {
00:21:24.730        "name": "BaseBdev3",
00:21:24.730        "uuid": "aa22be13-d914-40de-b72f-bd50e41147b4",
00:21:24.730        "is_configured": true,
00:21:24.730        "data_offset": 0,
00:21:24.730        "data_size": 65536
00:21:24.730      },
00:21:24.730      {
00:21:24.730        "name": "BaseBdev4",
00:21:24.730        "uuid": "634beb73-391a-4ea9-a758-7237286e6918",
00:21:24.730        "is_configured": true,
00:21:24.730        "data_offset": 0,
00:21:24.730        "data_size": 65536
00:21:24.730      }
00:21:24.730    ]
00:21:24.730  }'
00:21:24.730   23:54:55	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:24.730   23:54:55	-- common/autotest_common.sh@10 -- # set +x
00:21:25.297   23:54:55	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:21:25.555  [2024-12-13 23:54:56.092966] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:21:25.555  [2024-12-13 23:54:56.093313] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:25.555   23:54:56	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:21:25.555  [2024-12-13 23:54:56.143591] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0
00:21:25.555  [2024-12-13 23:54:56.145717] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:21:25.555  [2024-12-13 23:54:56.253982] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:21:25.555  [2024-12-13 23:54:56.254632] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:21:25.814  [2024-12-13 23:54:56.377281] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:21:26.073  [2024-12-13 23:54:56.734943] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:21:26.331  [2024-12-13 23:54:56.966835] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:21:26.331  [2024-12-13 23:54:56.967968] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:21:26.590   23:54:57	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:26.590   23:54:57	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:26.590   23:54:57	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:26.590   23:54:57	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:26.590   23:54:57	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:26.590    23:54:57	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:26.590    23:54:57	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:26.590  [2024-12-13 23:54:57.312989] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:21:26.849   23:54:57	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:26.849    "name": "raid_bdev1",
00:21:26.849    "uuid": "da225ad8-c5fa-4c2d-af63-60a5b2444b8a",
00:21:26.849    "strip_size_kb": 0,
00:21:26.849    "state": "online",
00:21:26.849    "raid_level": "raid1",
00:21:26.849    "superblock": false,
00:21:26.849    "num_base_bdevs": 4,
00:21:26.849    "num_base_bdevs_discovered": 4,
00:21:26.849    "num_base_bdevs_operational": 4,
00:21:26.849    "process": {
00:21:26.849      "type": "rebuild",
00:21:26.849      "target": "spare",
00:21:26.849      "progress": {
00:21:26.849        "blocks": 14336,
00:21:26.849        "percent": 21
00:21:26.849      }
00:21:26.849    },
00:21:26.849    "base_bdevs_list": [
00:21:26.849      {
00:21:26.849        "name": "spare",
00:21:26.849        "uuid": "64f82f9e-e774-5249-bba9-3a3dab19030a",
00:21:26.849        "is_configured": true,
00:21:26.849        "data_offset": 0,
00:21:26.849        "data_size": 65536
00:21:26.849      },
00:21:26.849      {
00:21:26.849        "name": "BaseBdev2",
00:21:26.849        "uuid": "ea4b3246-0405-439a-8047-58ab4885d667",
00:21:26.849        "is_configured": true,
00:21:26.849        "data_offset": 0,
00:21:26.849        "data_size": 65536
00:21:26.849      },
00:21:26.849      {
00:21:26.849        "name": "BaseBdev3",
00:21:26.849        "uuid": "aa22be13-d914-40de-b72f-bd50e41147b4",
00:21:26.849        "is_configured": true,
00:21:26.849        "data_offset": 0,
00:21:26.849        "data_size": 65536
00:21:26.849      },
00:21:26.849      {
00:21:26.849        "name": "BaseBdev4",
00:21:26.849        "uuid": "634beb73-391a-4ea9-a758-7237286e6918",
00:21:26.849        "is_configured": true,
00:21:26.849        "data_offset": 0,
00:21:26.849        "data_size": 65536
00:21:26.849      }
00:21:26.849    ]
00:21:26.849  }'
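The progress figures are easy to sanity-check: percent appears to be floor(100 * blocks / 65536) over this 65536-block array, so 14336 blocks gives floor(21.875) = 21 as dumped above, and the later snapshots fit the same formula (18432 -> 28, 26624 -> 40, 30720 -> 46, 51200 -> 78). As a one-liner under that reading:

    blocks=14336; total=65536
    echo $(( blocks * 100 / total ))   # -> 21, matching "percent": 21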
00:21:26.849    23:54:57	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:26.849   23:54:57	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:26.849    23:54:57	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:26.849   23:54:57	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:26.849   23:54:57	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:21:27.107  [2024-12-13 23:54:57.656575] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:21:27.107  [2024-12-13 23:54:57.741479] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:21:27.366  [2024-12-13 23:54:57.844003] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:21:27.366  [2024-12-13 23:54:57.845900] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:27.366  [2024-12-13 23:54:57.860835] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00
00:21:27.366   23:54:57	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:27.366   23:54:57	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:27.366   23:54:57	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:27.366   23:54:57	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:27.366   23:54:57	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:27.366   23:54:57	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:27.366   23:54:57	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:27.366   23:54:57	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:27.366   23:54:57	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:27.366   23:54:57	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:27.366    23:54:57	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:27.366    23:54:57	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:27.625   23:54:58	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:27.625    "name": "raid_bdev1",
00:21:27.625    "uuid": "da225ad8-c5fa-4c2d-af63-60a5b2444b8a",
00:21:27.625    "strip_size_kb": 0,
00:21:27.625    "state": "online",
00:21:27.626    "raid_level": "raid1",
00:21:27.626    "superblock": false,
00:21:27.626    "num_base_bdevs": 4,
00:21:27.626    "num_base_bdevs_discovered": 3,
00:21:27.626    "num_base_bdevs_operational": 3,
00:21:27.626    "base_bdevs_list": [
00:21:27.626      {
00:21:27.626        "name": null,
00:21:27.626        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:27.626        "is_configured": false,
00:21:27.626        "data_offset": 0,
00:21:27.626        "data_size": 65536
00:21:27.626      },
00:21:27.626      {
00:21:27.626        "name": "BaseBdev2",
00:21:27.626        "uuid": "ea4b3246-0405-439a-8047-58ab4885d667",
00:21:27.626        "is_configured": true,
00:21:27.626        "data_offset": 0,
00:21:27.626        "data_size": 65536
00:21:27.626      },
00:21:27.626      {
00:21:27.626        "name": "BaseBdev3",
00:21:27.626        "uuid": "aa22be13-d914-40de-b72f-bd50e41147b4",
00:21:27.626        "is_configured": true,
00:21:27.626        "data_offset": 0,
00:21:27.626        "data_size": 65536
00:21:27.626      },
00:21:27.626      {
00:21:27.626        "name": "BaseBdev4",
00:21:27.626        "uuid": "634beb73-391a-4ea9-a758-7237286e6918",
00:21:27.626        "is_configured": true,
00:21:27.626        "data_offset": 0,
00:21:27.626        "data_size": 65536
00:21:27.626      }
00:21:27.626    ]
00:21:27.626  }'
00:21:27.626   23:54:58	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:27.626   23:54:58	-- common/autotest_common.sh@10 -- # set +x
00:21:28.193   23:54:58	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:21:28.193   23:54:58	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:28.193   23:54:58	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:21:28.193   23:54:58	-- bdev/bdev_raid.sh@185 -- # local target=none
00:21:28.193   23:54:58	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:28.193    23:54:58	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:28.193    23:54:58	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:28.452   23:54:59	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:28.452    "name": "raid_bdev1",
00:21:28.452    "uuid": "da225ad8-c5fa-4c2d-af63-60a5b2444b8a",
00:21:28.452    "strip_size_kb": 0,
00:21:28.452    "state": "online",
00:21:28.452    "raid_level": "raid1",
00:21:28.452    "superblock": false,
00:21:28.452    "num_base_bdevs": 4,
00:21:28.452    "num_base_bdevs_discovered": 3,
00:21:28.452    "num_base_bdevs_operational": 3,
00:21:28.452    "base_bdevs_list": [
00:21:28.452      {
00:21:28.452        "name": null,
00:21:28.452        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:28.452        "is_configured": false,
00:21:28.452        "data_offset": 0,
00:21:28.452        "data_size": 65536
00:21:28.452      },
00:21:28.452      {
00:21:28.452        "name": "BaseBdev2",
00:21:28.452        "uuid": "ea4b3246-0405-439a-8047-58ab4885d667",
00:21:28.452        "is_configured": true,
00:21:28.452        "data_offset": 0,
00:21:28.452        "data_size": 65536
00:21:28.452      },
00:21:28.452      {
00:21:28.452        "name": "BaseBdev3",
00:21:28.452        "uuid": "aa22be13-d914-40de-b72f-bd50e41147b4",
00:21:28.452        "is_configured": true,
00:21:28.452        "data_offset": 0,
00:21:28.452        "data_size": 65536
00:21:28.452      },
00:21:28.452      {
00:21:28.452        "name": "BaseBdev4",
00:21:28.452        "uuid": "634beb73-391a-4ea9-a758-7237286e6918",
00:21:28.452        "is_configured": true,
00:21:28.452        "data_offset": 0,
00:21:28.452        "data_size": 65536
00:21:28.452      }
00:21:28.452    ]
00:21:28.452  }'
00:21:28.452    23:54:59	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:28.452   23:54:59	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:21:28.452    23:54:59	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:28.452   23:54:59	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:21:28.452   23:54:59	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:21:28.711  [2024-12-13 23:54:59.377609] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:21:28.711  [2024-12-13 23:54:59.377701] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:28.711   23:54:59	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:21:28.711  [2024-12-13 23:54:59.433718] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:21:28.711  [2024-12-13 23:54:59.435642] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:21:28.969  [2024-12-13 23:54:59.567049] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:21:28.969  [2024-12-13 23:54:59.568512] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:21:29.227  [2024-12-13 23:54:59.791867] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:21:29.227  [2024-12-13 23:54:59.792127] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:21:29.794  [2024-12-13 23:55:00.381550] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:21:29.794  [2024-12-13 23:55:00.382084] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:21:29.794   23:55:00	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:29.794   23:55:00	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:29.794   23:55:00	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:29.794   23:55:00	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:29.794   23:55:00	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:29.794    23:55:00	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:29.794    23:55:00	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:29.794  [2024-12-13 23:55:00.486540] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:21:29.794  [2024-12-13 23:55:00.486846] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:21:30.053   23:55:00	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:30.053    "name": "raid_bdev1",
00:21:30.053    "uuid": "da225ad8-c5fa-4c2d-af63-60a5b2444b8a",
00:21:30.053    "strip_size_kb": 0,
00:21:30.053    "state": "online",
00:21:30.053    "raid_level": "raid1",
00:21:30.053    "superblock": false,
00:21:30.053    "num_base_bdevs": 4,
00:21:30.053    "num_base_bdevs_discovered": 4,
00:21:30.053    "num_base_bdevs_operational": 4,
00:21:30.053    "process": {
00:21:30.053      "type": "rebuild",
00:21:30.053      "target": "spare",
00:21:30.053      "progress": {
00:21:30.053        "blocks": 18432,
00:21:30.053        "percent": 28
00:21:30.053      }
00:21:30.053    },
00:21:30.053    "base_bdevs_list": [
00:21:30.053      {
00:21:30.053        "name": "spare",
00:21:30.053        "uuid": "64f82f9e-e774-5249-bba9-3a3dab19030a",
00:21:30.053        "is_configured": true,
00:21:30.053        "data_offset": 0,
00:21:30.053        "data_size": 65536
00:21:30.053      },
00:21:30.053      {
00:21:30.053        "name": "BaseBdev2",
00:21:30.053        "uuid": "ea4b3246-0405-439a-8047-58ab4885d667",
00:21:30.053        "is_configured": true,
00:21:30.053        "data_offset": 0,
00:21:30.053        "data_size": 65536
00:21:30.053      },
00:21:30.053      {
00:21:30.053        "name": "BaseBdev3",
00:21:30.053        "uuid": "aa22be13-d914-40de-b72f-bd50e41147b4",
00:21:30.053        "is_configured": true,
00:21:30.053        "data_offset": 0,
00:21:30.053        "data_size": 65536
00:21:30.053      },
00:21:30.053      {
00:21:30.053        "name": "BaseBdev4",
00:21:30.053        "uuid": "634beb73-391a-4ea9-a758-7237286e6918",
00:21:30.053        "is_configured": true,
00:21:30.053        "data_offset": 0,
00:21:30.053        "data_size": 65536
00:21:30.053      }
00:21:30.053    ]
00:21:30.053  }'
00:21:30.053    23:55:00	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:30.053  [2024-12-13 23:55:00.725653] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:21:30.053  [2024-12-13 23:55:00.726173] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:21:30.053   23:55:00	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:30.053    23:55:00	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:30.053   23:55:00	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:30.053   23:55:00	-- bdev/bdev_raid.sh@617 -- # '[' false = true ']'
00:21:30.053   23:55:00	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4
00:21:30.053   23:55:00	-- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:21:30.053   23:55:00	-- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']'
00:21:30.053   23:55:00	-- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:21:30.312  [2024-12-13 23:55:00.934459] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:21:30.312  [2024-12-13 23:55:00.934866] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:21:30.312  [2024-12-13 23:55:00.990617] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:21:30.571  [2024-12-13 23:55:01.168115] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005a00
00:21:30.571  [2024-12-13 23:55:01.168158] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005c70
00:21:30.571   23:55:01	-- bdev/bdev_raid.sh@649 -- # base_bdevs[1]=
00:21:30.571   23:55:01	-- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- ))
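After the @646 removal succeeds, the harness mirrors it in its own bookkeeping: @649 blanks slot 1 of the base_bdevs array (BaseBdev2, per the @521 declaration earlier in this test) and @650 drops the operational count from 4 to 3, which is why the state checks that follow expect 3 discovered. In isolation:

    base_bdevs[1]=                      # @649: clear the removed BaseBdev2 slot
    (( num_base_bdevs_operational-- ))  # @650: 4 -> 3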
00:21:30.571   23:55:01	-- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:30.571   23:55:01	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:30.571   23:55:01	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:30.571   23:55:01	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:30.571   23:55:01	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:30.571    23:55:01	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:30.571    23:55:01	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:30.571  [2024-12-13 23:55:01.294951] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:21:30.829   23:55:01	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:30.829    "name": "raid_bdev1",
00:21:30.829    "uuid": "da225ad8-c5fa-4c2d-af63-60a5b2444b8a",
00:21:30.829    "strip_size_kb": 0,
00:21:30.829    "state": "online",
00:21:30.829    "raid_level": "raid1",
00:21:30.829    "superblock": false,
00:21:30.829    "num_base_bdevs": 4,
00:21:30.829    "num_base_bdevs_discovered": 3,
00:21:30.829    "num_base_bdevs_operational": 3,
00:21:30.829    "process": {
00:21:30.829      "type": "rebuild",
00:21:30.829      "target": "spare",
00:21:30.829      "progress": {
00:21:30.829        "blocks": 26624,
00:21:30.829        "percent": 40
00:21:30.829      }
00:21:30.829    },
00:21:30.829    "base_bdevs_list": [
00:21:30.829      {
00:21:30.829        "name": "spare",
00:21:30.829        "uuid": "64f82f9e-e774-5249-bba9-3a3dab19030a",
00:21:30.829        "is_configured": true,
00:21:30.829        "data_offset": 0,
00:21:30.829        "data_size": 65536
00:21:30.829      },
00:21:30.829      {
00:21:30.829        "name": null,
00:21:30.829        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:30.829        "is_configured": false,
00:21:30.829        "data_offset": 0,
00:21:30.829        "data_size": 65536
00:21:30.829      },
00:21:30.829      {
00:21:30.829        "name": "BaseBdev3",
00:21:30.829        "uuid": "aa22be13-d914-40de-b72f-bd50e41147b4",
00:21:30.829        "is_configured": true,
00:21:30.829        "data_offset": 0,
00:21:30.829        "data_size": 65536
00:21:30.829      },
00:21:30.829      {
00:21:30.829        "name": "BaseBdev4",
00:21:30.829        "uuid": "634beb73-391a-4ea9-a758-7237286e6918",
00:21:30.829        "is_configured": true,
00:21:30.829        "data_offset": 0,
00:21:30.829        "data_size": 65536
00:21:30.829      }
00:21:30.829    ]
00:21:30.829  }'
00:21:30.829    23:55:01	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:30.829   23:55:01	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:30.829    23:55:01	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:30.829  [2024-12-13 23:55:01.518507] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:21:30.829  [2024-12-13 23:55:01.518830] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:21:30.829   23:55:01	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:30.829   23:55:01	-- bdev/bdev_raid.sh@657 -- # local timeout=512
00:21:30.829   23:55:01	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:21:30.829   23:55:01	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:30.829   23:55:01	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:30.830   23:55:01	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:30.830   23:55:01	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:30.830   23:55:01	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:30.830    23:55:01	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:30.830    23:55:01	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:31.396   23:55:01	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:31.397    "name": "raid_bdev1",
00:21:31.397    "uuid": "da225ad8-c5fa-4c2d-af63-60a5b2444b8a",
00:21:31.397    "strip_size_kb": 0,
00:21:31.397    "state": "online",
00:21:31.397    "raid_level": "raid1",
00:21:31.397    "superblock": false,
00:21:31.397    "num_base_bdevs": 4,
00:21:31.397    "num_base_bdevs_discovered": 3,
00:21:31.397    "num_base_bdevs_operational": 3,
00:21:31.397    "process": {
00:21:31.397      "type": "rebuild",
00:21:31.397      "target": "spare",
00:21:31.397      "progress": {
00:21:31.397        "blocks": 30720,
00:21:31.397        "percent": 46
00:21:31.397      }
00:21:31.397    },
00:21:31.397    "base_bdevs_list": [
00:21:31.397      {
00:21:31.397        "name": "spare",
00:21:31.397        "uuid": "64f82f9e-e774-5249-bba9-3a3dab19030a",
00:21:31.397        "is_configured": true,
00:21:31.397        "data_offset": 0,
00:21:31.397        "data_size": 65536
00:21:31.397      },
00:21:31.397      {
00:21:31.397        "name": null,
00:21:31.397        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:31.397        "is_configured": false,
00:21:31.397        "data_offset": 0,
00:21:31.397        "data_size": 65536
00:21:31.397      },
00:21:31.397      {
00:21:31.397        "name": "BaseBdev3",
00:21:31.397        "uuid": "aa22be13-d914-40de-b72f-bd50e41147b4",
00:21:31.397        "is_configured": true,
00:21:31.397        "data_offset": 0,
00:21:31.397        "data_size": 65536
00:21:31.397      },
00:21:31.397      {
00:21:31.397        "name": "BaseBdev4",
00:21:31.397        "uuid": "634beb73-391a-4ea9-a758-7237286e6918",
00:21:31.397        "is_configured": true,
00:21:31.397        "data_offset": 0,
00:21:31.397        "data_size": 65536
00:21:31.397      }
00:21:31.397    ]
00:21:31.397  }'
00:21:31.397    23:55:01	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:31.397   23:55:01	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:31.397    23:55:01	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:31.397   23:55:01	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:31.397   23:55:01	-- bdev/bdev_raid.sh@662 -- # sleep 1
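The @657-@662 lines are one iteration of a timeout-bounded poll: re-read the array JSON and sleep until the rebuild process disappears or bash's SECONDS counter passes the 512-second budget. A condensed sketch of that loop (not the literal @658-@662 body, which re-runs the full verify helper each pass):

    timeout=512                         # @657
    while (( SECONDS < timeout )); do   # @658
        type=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
               | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [ "$type" = none ] && break     # rebuild finished (@660)
        sleep 1                         # @662
    done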
00:21:31.397  [2024-12-13 23:55:01.959197] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:21:31.963  [2024-12-13 23:55:02.580708] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152
00:21:32.222  [2024-12-13 23:55:02.702699] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152
00:21:32.222   23:55:02	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:21:32.222   23:55:02	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:32.222   23:55:02	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:32.222   23:55:02	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:32.222   23:55:02	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:32.222   23:55:02	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:32.222    23:55:02	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:32.222    23:55:02	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:32.482  [2024-12-13 23:55:03.021456] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296
00:21:32.482   23:55:03	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:32.482    "name": "raid_bdev1",
00:21:32.482    "uuid": "da225ad8-c5fa-4c2d-af63-60a5b2444b8a",
00:21:32.482    "strip_size_kb": 0,
00:21:32.482    "state": "online",
00:21:32.482    "raid_level": "raid1",
00:21:32.482    "superblock": false,
00:21:32.482    "num_base_bdevs": 4,
00:21:32.482    "num_base_bdevs_discovered": 3,
00:21:32.482    "num_base_bdevs_operational": 3,
00:21:32.482    "process": {
00:21:32.482      "type": "rebuild",
00:21:32.482      "target": "spare",
00:21:32.482      "progress": {
00:21:32.482        "blocks": 51200,
00:21:32.482        "percent": 78
00:21:32.482      }
00:21:32.482    },
00:21:32.482    "base_bdevs_list": [
00:21:32.482      {
00:21:32.482        "name": "spare",
00:21:32.482        "uuid": "64f82f9e-e774-5249-bba9-3a3dab19030a",
00:21:32.482        "is_configured": true,
00:21:32.482        "data_offset": 0,
00:21:32.482        "data_size": 65536
00:21:32.482      },
00:21:32.482      {
00:21:32.482        "name": null,
00:21:32.482        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:32.482        "is_configured": false,
00:21:32.482        "data_offset": 0,
00:21:32.482        "data_size": 65536
00:21:32.482      },
00:21:32.482      {
00:21:32.482        "name": "BaseBdev3",
00:21:32.482        "uuid": "aa22be13-d914-40de-b72f-bd50e41147b4",
00:21:32.482        "is_configured": true,
00:21:32.482        "data_offset": 0,
00:21:32.482        "data_size": 65536
00:21:32.482      },
00:21:32.482      {
00:21:32.482        "name": "BaseBdev4",
00:21:32.482        "uuid": "634beb73-391a-4ea9-a758-7237286e6918",
00:21:32.482        "is_configured": true,
00:21:32.482        "data_offset": 0,
00:21:32.482        "data_size": 65536
00:21:32.482      }
00:21:32.482    ]
00:21:32.482  }'
00:21:32.482    23:55:03	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:32.741   23:55:03	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:32.741    23:55:03	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:32.741  [2024-12-13 23:55:03.243986] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296
00:21:32.741   23:55:03	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:32.741   23:55:03	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:21:33.309  [2024-12-13 23:55:03.898477] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:21:33.309  [2024-12-13 23:55:03.995955] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:21:33.309  [2024-12-13 23:55:03.998219] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:33.568   23:55:04	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:21:33.568   23:55:04	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:33.568   23:55:04	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:33.568   23:55:04	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:33.568   23:55:04	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:33.568   23:55:04	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:33.568    23:55:04	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:33.568    23:55:04	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:33.827   23:55:04	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:33.827    "name": "raid_bdev1",
00:21:33.827    "uuid": "da225ad8-c5fa-4c2d-af63-60a5b2444b8a",
00:21:33.827    "strip_size_kb": 0,
00:21:33.827    "state": "online",
00:21:33.827    "raid_level": "raid1",
00:21:33.827    "superblock": false,
00:21:33.827    "num_base_bdevs": 4,
00:21:33.827    "num_base_bdevs_discovered": 3,
00:21:33.827    "num_base_bdevs_operational": 3,
00:21:33.827    "base_bdevs_list": [
00:21:33.827      {
00:21:33.827        "name": "spare",
00:21:33.827        "uuid": "64f82f9e-e774-5249-bba9-3a3dab19030a",
00:21:33.827        "is_configured": true,
00:21:33.827        "data_offset": 0,
00:21:33.827        "data_size": 65536
00:21:33.827      },
00:21:33.827      {
00:21:33.827        "name": null,
00:21:33.827        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:33.827        "is_configured": false,
00:21:33.827        "data_offset": 0,
00:21:33.827        "data_size": 65536
00:21:33.827      },
00:21:33.827      {
00:21:33.827        "name": "BaseBdev3",
00:21:33.827        "uuid": "aa22be13-d914-40de-b72f-bd50e41147b4",
00:21:33.827        "is_configured": true,
00:21:33.827        "data_offset": 0,
00:21:33.827        "data_size": 65536
00:21:33.827      },
00:21:33.827      {
00:21:33.827        "name": "BaseBdev4",
00:21:33.827        "uuid": "634beb73-391a-4ea9-a758-7237286e6918",
00:21:33.827        "is_configured": true,
00:21:33.827        "data_offset": 0,
00:21:33.827        "data_size": 65536
00:21:33.827      }
00:21:33.827    ]
00:21:33.827  }'
00:21:33.827    23:55:04	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:34.098   23:55:04	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:21:34.098    23:55:04	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:34.098   23:55:04	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:21:34.098   23:55:04	-- bdev/bdev_raid.sh@660 -- # break
00:21:34.098   23:55:04	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:21:34.098   23:55:04	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:34.098   23:55:04	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:21:34.098   23:55:04	-- bdev/bdev_raid.sh@185 -- # local target=none
00:21:34.098   23:55:04	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:34.098    23:55:04	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:34.098    23:55:04	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:34.370   23:55:04	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:34.370    "name": "raid_bdev1",
00:21:34.370    "uuid": "da225ad8-c5fa-4c2d-af63-60a5b2444b8a",
00:21:34.370    "strip_size_kb": 0,
00:21:34.370    "state": "online",
00:21:34.370    "raid_level": "raid1",
00:21:34.370    "superblock": false,
00:21:34.370    "num_base_bdevs": 4,
00:21:34.370    "num_base_bdevs_discovered": 3,
00:21:34.370    "num_base_bdevs_operational": 3,
00:21:34.370    "base_bdevs_list": [
00:21:34.370      {
00:21:34.370        "name": "spare",
00:21:34.370        "uuid": "64f82f9e-e774-5249-bba9-3a3dab19030a",
00:21:34.370        "is_configured": true,
00:21:34.370        "data_offset": 0,
00:21:34.370        "data_size": 65536
00:21:34.370      },
00:21:34.370      {
00:21:34.370        "name": null,
00:21:34.370        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:34.370        "is_configured": false,
00:21:34.370        "data_offset": 0,
00:21:34.370        "data_size": 65536
00:21:34.370      },
00:21:34.370      {
00:21:34.370        "name": "BaseBdev3",
00:21:34.370        "uuid": "aa22be13-d914-40de-b72f-bd50e41147b4",
00:21:34.370        "is_configured": true,
00:21:34.370        "data_offset": 0,
00:21:34.370        "data_size": 65536
00:21:34.370      },
00:21:34.370      {
00:21:34.370        "name": "BaseBdev4",
00:21:34.370        "uuid": "634beb73-391a-4ea9-a758-7237286e6918",
00:21:34.370        "is_configured": true,
00:21:34.370        "data_offset": 0,
00:21:34.370        "data_size": 65536
00:21:34.370      }
00:21:34.370    ]
00:21:34.370  }'
00:21:34.370    23:55:04	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:34.370   23:55:04	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:21:34.370    23:55:04	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:34.370   23:55:04	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:21:34.370   23:55:04	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:34.370   23:55:04	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:34.370   23:55:04	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:34.370   23:55:04	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:34.370   23:55:04	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:34.370   23:55:04	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:34.370   23:55:04	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:34.370   23:55:04	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:34.370   23:55:04	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:34.370   23:55:04	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:34.370    23:55:04	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:34.370    23:55:04	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:34.628   23:55:05	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:34.628    "name": "raid_bdev1",
00:21:34.628    "uuid": "da225ad8-c5fa-4c2d-af63-60a5b2444b8a",
00:21:34.628    "strip_size_kb": 0,
00:21:34.628    "state": "online",
00:21:34.628    "raid_level": "raid1",
00:21:34.628    "superblock": false,
00:21:34.628    "num_base_bdevs": 4,
00:21:34.628    "num_base_bdevs_discovered": 3,
00:21:34.628    "num_base_bdevs_operational": 3,
00:21:34.628    "base_bdevs_list": [
00:21:34.628      {
00:21:34.628        "name": "spare",
00:21:34.628        "uuid": "64f82f9e-e774-5249-bba9-3a3dab19030a",
00:21:34.628        "is_configured": true,
00:21:34.628        "data_offset": 0,
00:21:34.628        "data_size": 65536
00:21:34.628      },
00:21:34.628      {
00:21:34.628        "name": null,
00:21:34.628        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:34.628        "is_configured": false,
00:21:34.628        "data_offset": 0,
00:21:34.628        "data_size": 65536
00:21:34.628      },
00:21:34.628      {
00:21:34.628        "name": "BaseBdev3",
00:21:34.628        "uuid": "aa22be13-d914-40de-b72f-bd50e41147b4",
00:21:34.628        "is_configured": true,
00:21:34.628        "data_offset": 0,
00:21:34.628        "data_size": 65536
00:21:34.628      },
00:21:34.628      {
00:21:34.628        "name": "BaseBdev4",
00:21:34.628        "uuid": "634beb73-391a-4ea9-a758-7237286e6918",
00:21:34.628        "is_configured": true,
00:21:34.628        "data_offset": 0,
00:21:34.628        "data_size": 65536
00:21:34.628      }
00:21:34.628    ]
00:21:34.628  }'
00:21:34.628   23:55:05	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:34.628   23:55:05	-- common/autotest_common.sh@10 -- # set +x
00:21:35.196   23:55:05	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:21:35.454  [2024-12-13 23:55:06.049884] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:21:35.454  [2024-12-13 23:55:06.049928] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:21:35.454  
00:21:35.454                                                                                                  Latency(us)
00:21:35.454  
[2024-12-13T23:55:06.186Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:35.454  
[2024-12-13T23:55:06.186Z]  Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:21:35.454  	 raid_bdev1          :      11.23     104.07     312.20       0.00     0.00   13153.52     279.27  117726.49
00:21:35.454  
[2024-12-13T23:55:06.186Z]  ===================================================================================================================
00:21:35.454  
[2024-12-13T23:55:06.186Z]  Total                       :                104.07     312.20       0.00     0.00   13153.52     279.27  117726.49
00:21:35.455  [2024-12-13 23:55:06.164637] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:35.455  [2024-12-13 23:55:06.164677] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:35.455  [2024-12-13 23:55:06.164757] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:21:35.455  [2024-12-13 23:55:06.164770] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline
00:21:35.455  0
00:21:35.713    23:55:06	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:35.713    23:55:06	-- bdev/bdev_raid.sh@671 -- # jq length
00:21:35.713   23:55:06	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:21:35.713   23:55:06	-- bdev/bdev_raid.sh@673 -- # '[' true = true ']'
00:21:35.713   23:55:06	-- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:21:35.713   23:55:06	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:35.713   23:55:06	-- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:21:35.713   23:55:06	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:35.713   23:55:06	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:21:35.713   23:55:06	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:35.713   23:55:06	-- bdev/nbd_common.sh@12 -- # local i
00:21:35.713   23:55:06	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:35.713   23:55:06	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:35.713   23:55:06	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
00:21:35.972  /dev/nbd0
00:21:35.972    23:55:06	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:21:36.230   23:55:06	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:21:36.230   23:55:06	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:21:36.230   23:55:06	-- common/autotest_common.sh@867 -- # local i
00:21:36.230   23:55:06	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:21:36.230   23:55:06	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:21:36.230   23:55:06	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:21:36.230   23:55:06	-- common/autotest_common.sh@871 -- # break
00:21:36.230   23:55:06	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:21:36.230   23:55:06	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:21:36.230   23:55:06	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:36.230  1+0 records in
00:21:36.230  1+0 records out
00:21:36.230  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374 s, 11.0 MB/s
00:21:36.230    23:55:06	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:36.230   23:55:06	-- common/autotest_common.sh@884 -- # size=4096
00:21:36.230   23:55:06	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:36.230   23:55:06	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:21:36.230   23:55:06	-- common/autotest_common.sh@887 -- # return 0
00:21:36.230   23:55:06	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:36.230   23:55:06	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:36.230   23:55:06	-- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}"
00:21:36.230   23:55:06	-- bdev/bdev_raid.sh@677 -- # '[' -z '' ']'
00:21:36.230   23:55:06	-- bdev/bdev_raid.sh@678 -- # continue
00:21:36.230   23:55:06	-- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}"
00:21:36.230   23:55:06	-- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']'
00:21:36.230   23:55:06	-- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1
00:21:36.230   23:55:06	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:36.230   23:55:06	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3')
00:21:36.230   23:55:06	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:36.230   23:55:06	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:21:36.230   23:55:06	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:36.230   23:55:06	-- bdev/nbd_common.sh@12 -- # local i
00:21:36.230   23:55:06	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:36.230   23:55:06	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:36.230   23:55:06	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1
00:21:36.489  /dev/nbd1
00:21:36.489    23:55:07	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:21:36.489   23:55:07	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:21:36.489   23:55:07	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:21:36.489   23:55:07	-- common/autotest_common.sh@867 -- # local i
00:21:36.489   23:55:07	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:21:36.489   23:55:07	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:21:36.489   23:55:07	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:21:36.489   23:55:07	-- common/autotest_common.sh@871 -- # break
00:21:36.489   23:55:07	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:21:36.489   23:55:07	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:21:36.489   23:55:07	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:36.489  1+0 records in
00:21:36.489  1+0 records out
00:21:36.489  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494462 s, 8.3 MB/s
00:21:36.489    23:55:07	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:36.489   23:55:07	-- common/autotest_common.sh@884 -- # size=4096
00:21:36.489   23:55:07	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:36.489   23:55:07	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:21:36.489   23:55:07	-- common/autotest_common.sh@887 -- # return 0
00:21:36.489   23:55:07	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:36.489   23:55:07	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:36.489   23:55:07	-- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:21:36.489   23:55:07	-- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1
00:21:36.489   23:55:07	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:36.489   23:55:07	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:21:36.489   23:55:07	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:36.489   23:55:07	-- bdev/nbd_common.sh@51 -- # local i
00:21:36.489   23:55:07	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:36.489   23:55:07	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:21:36.748    23:55:07	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:21:36.748   23:55:07	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:21:36.748   23:55:07	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:21:36.748   23:55:07	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:36.748   23:55:07	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:36.748   23:55:07	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:21:36.748   23:55:07	-- bdev/nbd_common.sh@41 -- # break
00:21:36.748   23:55:07	-- bdev/nbd_common.sh@45 -- # return 0
00:21:36.748   23:55:07	-- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}"
00:21:36.748   23:55:07	-- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']'
00:21:36.748   23:55:07	-- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1
00:21:36.748   23:55:07	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:36.748   23:55:07	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4')
00:21:36.748   23:55:07	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:36.748   23:55:07	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:21:36.748   23:55:07	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:36.748   23:55:07	-- bdev/nbd_common.sh@12 -- # local i
00:21:36.748   23:55:07	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:36.748   23:55:07	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:36.748   23:55:07	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1
00:21:37.006  /dev/nbd1
00:21:37.006    23:55:07	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:21:37.006   23:55:07	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:21:37.006   23:55:07	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:21:37.006   23:55:07	-- common/autotest_common.sh@867 -- # local i
00:21:37.006   23:55:07	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:21:37.006   23:55:07	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:21:37.006   23:55:07	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:21:37.006   23:55:07	-- common/autotest_common.sh@871 -- # break
00:21:37.006   23:55:07	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:21:37.006   23:55:07	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:21:37.006   23:55:07	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:37.006  1+0 records in
00:21:37.006  1+0 records out
00:21:37.006  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290692 s, 14.1 MB/s
00:21:37.006    23:55:07	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:37.006   23:55:07	-- common/autotest_common.sh@884 -- # size=4096
00:21:37.006   23:55:07	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:37.006   23:55:07	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:21:37.006   23:55:07	-- common/autotest_common.sh@887 -- # return 0
00:21:37.006   23:55:07	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:37.006   23:55:07	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:37.006   23:55:07	-- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:21:37.264   23:55:07	-- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1
00:21:37.264   23:55:07	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:37.264   23:55:07	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:21:37.264   23:55:07	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:37.264   23:55:07	-- bdev/nbd_common.sh@51 -- # local i
00:21:37.264   23:55:07	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:37.264   23:55:07	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:21:37.264    23:55:07	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:21:37.264   23:55:07	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:21:37.265   23:55:07	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:21:37.265   23:55:07	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:37.265   23:55:07	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:37.265   23:55:07	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:21:37.523   23:55:07	-- bdev/nbd_common.sh@41 -- # break
00:21:37.523   23:55:07	-- bdev/nbd_common.sh@45 -- # return 0
00:21:37.523   23:55:07	-- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:21:37.523   23:55:07	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:37.523   23:55:07	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:21:37.523   23:55:07	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:37.523   23:55:07	-- bdev/nbd_common.sh@51 -- # local i
00:21:37.523   23:55:08	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:37.523   23:55:08	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:21:37.782    23:55:08	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:21:37.782   23:55:08	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:21:37.782   23:55:08	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:21:37.782   23:55:08	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:37.782   23:55:08	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:37.782   23:55:08	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:21:37.782   23:55:08	-- bdev/nbd_common.sh@41 -- # break
00:21:37.782   23:55:08	-- bdev/nbd_common.sh@45 -- # return 0
00:21:37.782   23:55:08	-- bdev/bdev_raid.sh@692 -- # '[' false = true ']'
00:21:37.782   23:55:08	-- bdev/bdev_raid.sh@709 -- # killprocess 125661
00:21:37.782   23:55:08	-- common/autotest_common.sh@936 -- # '[' -z 125661 ']'
00:21:37.782   23:55:08	-- common/autotest_common.sh@940 -- # kill -0 125661
00:21:37.782    23:55:08	-- common/autotest_common.sh@941 -- # uname
00:21:37.782   23:55:08	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:37.782    23:55:08	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125661
00:21:37.782   23:55:08	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:37.782   23:55:08	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:37.782   23:55:08	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 125661'
00:21:37.782  killing process with pid 125661
00:21:37.782   23:55:08	-- common/autotest_common.sh@955 -- # kill 125661
00:21:37.782  Received shutdown signal, test time was about 13.377356 seconds
00:21:37.782  
00:21:37.782                                                                                                  Latency(us)
00:21:37.782  
[2024-12-13T23:55:08.514Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:37.782  
[2024-12-13T23:55:08.514Z]  ===================================================================================================================
00:21:37.782  
[2024-12-13T23:55:08.514Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:21:37.782  [2024-12-13 23:55:08.294037] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:21:37.782   23:55:08	-- common/autotest_common.sh@960 -- # wait 125661
00:21:38.042  [2024-12-13 23:55:08.572525] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:21:38.978  ************************************
00:21:38.978  END TEST raid_rebuild_test_io
00:21:38.978  ************************************
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@711 -- # return 0
00:21:38.978  
00:21:38.978  real	0m18.726s
00:21:38.978  user	0m28.902s
00:21:38.978  sys	0m2.265s
00:21:38.978   23:55:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:21:38.978   23:55:09	-- common/autotest_common.sh@10 -- # set +x
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true
00:21:38.978   23:55:09	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:21:38.978   23:55:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:38.978   23:55:09	-- common/autotest_common.sh@10 -- # set +x
00:21:38.978  ************************************
00:21:38.978  START TEST raid_rebuild_test_sb_io
00:21:38.978  ************************************
00:21:38.978   23:55:09	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true true
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@519 -- # local superblock=true
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@520 -- # local background_io=true
00:21:38.978    23:55:09	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:21:38.978    23:55:09	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:38.978    23:55:09	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:21:38.978    23:55:09	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:38.978    23:55:09	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:38.978    23:55:09	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:21:38.978    23:55:09	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:38.978    23:55:09	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:38.978    23:55:09	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:21:38.978    23:55:09	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:38.978    23:55:09	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:38.978    23:55:09	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev4
00:21:38.978    23:55:09	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:38.978    23:55:09	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@536 -- # strip_size=0
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@539 -- # '[' true = true ']'
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@540 -- # create_arg+=' -s'
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@544 -- # raid_pid=126166
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@545 -- # waitforlisten 126166 /var/tmp/spdk-raid.sock
00:21:38.978   23:55:09	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:21:38.978   23:55:09	-- common/autotest_common.sh@829 -- # '[' -z 126166 ']'
00:21:38.978   23:55:09	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:21:38.978   23:55:09	-- common/autotest_common.sh@834 -- # local max_retries=100
00:21:38.978   23:55:09	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:21:38.978  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:21:38.978   23:55:09	-- common/autotest_common.sh@838 -- # xtrace_disable
00:21:38.978   23:55:09	-- common/autotest_common.sh@10 -- # set +x
00:21:38.978  [2024-12-13 23:55:09.687801] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:38.978  [2024-12-13 23:55:09.688006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126166 ]
00:21:38.978  I/O size of 3145728 is greater than zero copy threshold (65536).
00:21:38.978  Zero copy mechanism will not be used.
00:21:39.237  [2024-12-13 23:55:09.850045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:39.495  [2024-12-13 23:55:10.028484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:39.495  [2024-12-13 23:55:10.199405] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:21:40.062   23:55:10	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:40.062   23:55:10	-- common/autotest_common.sh@862 -- # return 0
00:21:40.062   23:55:10	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:21:40.062   23:55:10	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:21:40.062   23:55:10	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:21:40.320  BaseBdev1_malloc
00:21:40.320   23:55:10	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:21:40.579  [2024-12-13 23:55:11.087822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:21:40.579  [2024-12-13 23:55:11.087906] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:40.579  [2024-12-13 23:55:11.087939] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:21:40.579  [2024-12-13 23:55:11.087983] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:40.579  [2024-12-13 23:55:11.090248] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:40.579  [2024-12-13 23:55:11.090294] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:21:40.579  BaseBdev1
00:21:40.579   23:55:11	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:21:40.579   23:55:11	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:21:40.579   23:55:11	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:21:40.837  BaseBdev2_malloc
00:21:40.837   23:55:11	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:21:41.095  [2024-12-13 23:55:11.607237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:21:41.095  [2024-12-13 23:55:11.607303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:41.095  [2024-12-13 23:55:11.607345] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:21:41.095  [2024-12-13 23:55:11.607405] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:41.095  [2024-12-13 23:55:11.609479] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:41.095  [2024-12-13 23:55:11.609527] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:21:41.095  BaseBdev2
00:21:41.095   23:55:11	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:21:41.095   23:55:11	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:21:41.095   23:55:11	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:21:41.095  BaseBdev3_malloc
00:21:41.353   23:55:11	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:21:41.353  [2024-12-13 23:55:12.008601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:21:41.353  [2024-12-13 23:55:12.008673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:41.353  [2024-12-13 23:55:12.008712] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:21:41.353  [2024-12-13 23:55:12.008754] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:41.353  [2024-12-13 23:55:12.011006] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:41.353  [2024-12-13 23:55:12.011060] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:21:41.353  BaseBdev3
00:21:41.353   23:55:12	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:21:41.353   23:55:12	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:21:41.353   23:55:12	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:21:41.611  BaseBdev4_malloc
00:21:41.611   23:55:12	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:21:41.870  [2024-12-13 23:55:12.487516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:21:41.870  [2024-12-13 23:55:12.487596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:41.870  [2024-12-13 23:55:12.487631] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80
00:21:41.870  [2024-12-13 23:55:12.487676] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:41.870  [2024-12-13 23:55:12.489811] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:41.870  [2024-12-13 23:55:12.489864] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:21:41.870  BaseBdev4
00:21:41.870   23:55:12	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:21:42.128  spare_malloc
00:21:42.128   23:55:12	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:21:42.386  spare_delay
00:21:42.386   23:55:12	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:21:42.644  [2024-12-13 23:55:13.124813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:21:42.644  [2024-12-13 23:55:13.124889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:42.644  [2024-12-13 23:55:13.124922] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:21:42.644  [2024-12-13 23:55:13.124964] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:42.644  [2024-12-13 23:55:13.127039] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:42.644  [2024-12-13 23:55:13.127097] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:21:42.644  spare
00:21:42.644   23:55:13	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:21:42.644  [2024-12-13 23:55:13.308921] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:21:42.644  [2024-12-13 23:55:13.310695] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:21:42.644  [2024-12-13 23:55:13.310779] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:21:42.644  [2024-12-13 23:55:13.310835] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:21:42.644  [2024-12-13 23:55:13.311015] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580
00:21:42.644  [2024-12-13 23:55:13.311027] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:21:42.644  [2024-12-13 23:55:13.311127] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:21:42.644  [2024-12-13 23:55:13.311492] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580
00:21:42.644  [2024-12-13 23:55:13.311515] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580
00:21:42.644  [2024-12-13 23:55:13.311647] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:42.645   23:55:13	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:21:42.645   23:55:13	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:42.645   23:55:13	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:42.645   23:55:13	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:42.645   23:55:13	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:42.645   23:55:13	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:21:42.645   23:55:13	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:42.645   23:55:13	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:42.645   23:55:13	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:42.645   23:55:13	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:42.645    23:55:13	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:42.645    23:55:13	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:42.903   23:55:13	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:42.903    "name": "raid_bdev1",
00:21:42.903    "uuid": "5b26395d-667b-4e45-ad18-71b44969e408",
00:21:42.903    "strip_size_kb": 0,
00:21:42.903    "state": "online",
00:21:42.903    "raid_level": "raid1",
00:21:42.903    "superblock": true,
00:21:42.903    "num_base_bdevs": 4,
00:21:42.903    "num_base_bdevs_discovered": 4,
00:21:42.903    "num_base_bdevs_operational": 4,
00:21:42.903    "base_bdevs_list": [
00:21:42.903      {
00:21:42.903        "name": "BaseBdev1",
00:21:42.903        "uuid": "8f54a490-2905-5fa1-bf56-67d47805c7c1",
00:21:42.903        "is_configured": true,
00:21:42.903        "data_offset": 2048,
00:21:42.903        "data_size": 63488
00:21:42.903      },
00:21:42.903      {
00:21:42.903        "name": "BaseBdev2",
00:21:42.903        "uuid": "40893ce5-8653-59f6-8dbd-618a470e7d9d",
00:21:42.903        "is_configured": true,
00:21:42.903        "data_offset": 2048,
00:21:42.903        "data_size": 63488
00:21:42.903      },
00:21:42.903      {
00:21:42.903        "name": "BaseBdev3",
00:21:42.903        "uuid": "432ac017-f3cb-55ca-9116-d71b9f443ec3",
00:21:42.903        "is_configured": true,
00:21:42.903        "data_offset": 2048,
00:21:42.903        "data_size": 63488
00:21:42.903      },
00:21:42.903      {
00:21:42.903        "name": "BaseBdev4",
00:21:42.903        "uuid": "b4701e34-5269-5e1c-b6b2-e2fc93b35916",
00:21:42.903        "is_configured": true,
00:21:42.903        "data_offset": 2048,
00:21:42.903        "data_size": 63488
00:21:42.903      }
00:21:42.903    ]
00:21:42.903  }'
00:21:42.903   23:55:13	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:42.903   23:55:13	-- common/autotest_common.sh@10 -- # set +x
00:21:43.470    23:55:14	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:21:43.470    23:55:14	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:21:43.728  [2024-12-13 23:55:14.277215] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:43.728   23:55:14	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488
00:21:43.728    23:55:14	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:43.728    23:55:14	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:21:43.986   23:55:14	-- bdev/bdev_raid.sh@570 -- # data_offset=2048
00:21:43.986   23:55:14	-- bdev/bdev_raid.sh@572 -- # '[' true = true ']'
00:21:43.986   23:55:14	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:21:43.986   23:55:14	-- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:21:43.986  [2024-12-13 23:55:14.547916] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:21:43.986  I/O size of 3145728 is greater than zero copy threshold (65536).
00:21:43.986  Zero copy mechanism will not be used.
00:21:43.986  Running I/O for 60 seconds...
00:21:43.986  [2024-12-13 23:55:14.638874] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:21:43.986  [2024-12-13 23:55:14.650844] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40
00:21:43.986   23:55:14	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:43.986   23:55:14	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:43.986   23:55:14	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:43.986   23:55:14	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:43.986   23:55:14	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:43.986   23:55:14	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:43.986   23:55:14	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:43.986   23:55:14	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:43.986   23:55:14	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:43.986   23:55:14	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:43.986    23:55:14	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:43.986    23:55:14	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:44.244   23:55:14	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:44.244    "name": "raid_bdev1",
00:21:44.244    "uuid": "5b26395d-667b-4e45-ad18-71b44969e408",
00:21:44.244    "strip_size_kb": 0,
00:21:44.244    "state": "online",
00:21:44.244    "raid_level": "raid1",
00:21:44.244    "superblock": true,
00:21:44.244    "num_base_bdevs": 4,
00:21:44.244    "num_base_bdevs_discovered": 3,
00:21:44.244    "num_base_bdevs_operational": 3,
00:21:44.244    "base_bdevs_list": [
00:21:44.244      {
00:21:44.244        "name": null,
00:21:44.244        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:44.244        "is_configured": false,
00:21:44.244        "data_offset": 2048,
00:21:44.244        "data_size": 63488
00:21:44.244      },
00:21:44.244      {
00:21:44.244        "name": "BaseBdev2",
00:21:44.244        "uuid": "40893ce5-8653-59f6-8dbd-618a470e7d9d",
00:21:44.244        "is_configured": true,
00:21:44.244        "data_offset": 2048,
00:21:44.244        "data_size": 63488
00:21:44.244      },
00:21:44.244      {
00:21:44.244        "name": "BaseBdev3",
00:21:44.244        "uuid": "432ac017-f3cb-55ca-9116-d71b9f443ec3",
00:21:44.244        "is_configured": true,
00:21:44.244        "data_offset": 2048,
00:21:44.244        "data_size": 63488
00:21:44.244      },
00:21:44.244      {
00:21:44.244        "name": "BaseBdev4",
00:21:44.244        "uuid": "b4701e34-5269-5e1c-b6b2-e2fc93b35916",
00:21:44.244        "is_configured": true,
00:21:44.244        "data_offset": 2048,
00:21:44.244        "data_size": 63488
00:21:44.244      }
00:21:44.244    ]
00:21:44.244  }'
00:21:44.244   23:55:14	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:44.244   23:55:14	-- common/autotest_common.sh@10 -- # set +x
00:21:44.813   23:55:15	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:21:45.072  [2024-12-13 23:55:15.671727] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:21:45.072  [2024-12-13 23:55:15.671799] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:45.072   23:55:15	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:21:45.072  [2024-12-13 23:55:15.733166] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:21:45.072  [2024-12-13 23:55:15.734967] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:21:45.330  [2024-12-13 23:55:15.857932] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:21:45.330  [2024-12-13 23:55:15.990223] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:21:45.330  [2024-12-13 23:55:15.990476] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:21:45.898  [2024-12-13 23:55:16.343477] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:21:45.898  [2024-12-13 23:55:16.344607] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:21:45.898  [2024-12-13 23:55:16.552531] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:21:45.898  [2024-12-13 23:55:16.552877] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:21:46.156   23:55:16	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:46.156   23:55:16	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:46.156   23:55:16	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:46.156   23:55:16	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:46.156   23:55:16	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:46.156    23:55:16	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:46.156    23:55:16	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:46.415  [2024-12-13 23:55:16.992274] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:21:46.415   23:55:17	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:46.415    "name": "raid_bdev1",
00:21:46.415    "uuid": "5b26395d-667b-4e45-ad18-71b44969e408",
00:21:46.415    "strip_size_kb": 0,
00:21:46.415    "state": "online",
00:21:46.415    "raid_level": "raid1",
00:21:46.415    "superblock": true,
00:21:46.415    "num_base_bdevs": 4,
00:21:46.415    "num_base_bdevs_discovered": 4,
00:21:46.415    "num_base_bdevs_operational": 4,
00:21:46.415    "process": {
00:21:46.415      "type": "rebuild",
00:21:46.415      "target": "spare",
00:21:46.415      "progress": {
00:21:46.415        "blocks": 16384,
00:21:46.415        "percent": 25
00:21:46.415      }
00:21:46.415    },
00:21:46.415    "base_bdevs_list": [
00:21:46.415      {
00:21:46.415        "name": "spare",
00:21:46.415        "uuid": "52e29146-df97-5448-82d2-49c653f7929e",
00:21:46.415        "is_configured": true,
00:21:46.415        "data_offset": 2048,
00:21:46.415        "data_size": 63488
00:21:46.415      },
00:21:46.415      {
00:21:46.415        "name": "BaseBdev2",
00:21:46.415        "uuid": "40893ce5-8653-59f6-8dbd-618a470e7d9d",
00:21:46.415        "is_configured": true,
00:21:46.415        "data_offset": 2048,
00:21:46.415        "data_size": 63488
00:21:46.415      },
00:21:46.415      {
00:21:46.415        "name": "BaseBdev3",
00:21:46.415        "uuid": "432ac017-f3cb-55ca-9116-d71b9f443ec3",
00:21:46.415        "is_configured": true,
00:21:46.415        "data_offset": 2048,
00:21:46.415        "data_size": 63488
00:21:46.415      },
00:21:46.415      {
00:21:46.415        "name": "BaseBdev4",
00:21:46.415        "uuid": "b4701e34-5269-5e1c-b6b2-e2fc93b35916",
00:21:46.415        "is_configured": true,
00:21:46.415        "data_offset": 2048,
00:21:46.415        "data_size": 63488
00:21:46.415      }
00:21:46.415    ]
00:21:46.415  }'
00:21:46.415    23:55:17	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:46.415   23:55:17	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:46.415    23:55:17	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:46.415   23:55:17	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:46.415   23:55:17	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:21:46.674  [2024-12-13 23:55:17.235650] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:21:46.674  [2024-12-13 23:55:17.358042] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:21:46.674  [2024-12-13 23:55:17.360250] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:21:46.933  [2024-12-13 23:55:17.594083] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:21:46.933  [2024-12-13 23:55:17.611334] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:46.933  [2024-12-13 23:55:17.643103] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40
00:21:47.191   23:55:17	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:47.191   23:55:17	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:47.191   23:55:17	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:47.191   23:55:17	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:47.191   23:55:17	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:47.191   23:55:17	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:47.191   23:55:17	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:47.191   23:55:17	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:47.191   23:55:17	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:47.191   23:55:17	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:47.191    23:55:17	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:47.191    23:55:17	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:47.450   23:55:17	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:47.450    "name": "raid_bdev1",
00:21:47.450    "uuid": "5b26395d-667b-4e45-ad18-71b44969e408",
00:21:47.450    "strip_size_kb": 0,
00:21:47.450    "state": "online",
00:21:47.450    "raid_level": "raid1",
00:21:47.450    "superblock": true,
00:21:47.450    "num_base_bdevs": 4,
00:21:47.450    "num_base_bdevs_discovered": 3,
00:21:47.450    "num_base_bdevs_operational": 3,
00:21:47.450    "base_bdevs_list": [
00:21:47.450      {
00:21:47.450        "name": null,
00:21:47.450        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:47.450        "is_configured": false,
00:21:47.450        "data_offset": 2048,
00:21:47.450        "data_size": 63488
00:21:47.450      },
00:21:47.450      {
00:21:47.450        "name": "BaseBdev2",
00:21:47.450        "uuid": "40893ce5-8653-59f6-8dbd-618a470e7d9d",
00:21:47.450        "is_configured": true,
00:21:47.450        "data_offset": 2048,
00:21:47.450        "data_size": 63488
00:21:47.450      },
00:21:47.450      {
00:21:47.450        "name": "BaseBdev3",
00:21:47.450        "uuid": "432ac017-f3cb-55ca-9116-d71b9f443ec3",
00:21:47.450        "is_configured": true,
00:21:47.450        "data_offset": 2048,
00:21:47.450        "data_size": 63488
00:21:47.450      },
00:21:47.450      {
00:21:47.450        "name": "BaseBdev4",
00:21:47.450        "uuid": "b4701e34-5269-5e1c-b6b2-e2fc93b35916",
00:21:47.450        "is_configured": true,
00:21:47.450        "data_offset": 2048,
00:21:47.450        "data_size": 63488
00:21:47.450      }
00:21:47.450    ]
00:21:47.450  }'
00:21:47.450   23:55:17	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:47.450   23:55:17	-- common/autotest_common.sh@10 -- # set +x
00:21:48.017   23:55:18	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:21:48.017   23:55:18	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:48.017   23:55:18	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:21:48.017   23:55:18	-- bdev/bdev_raid.sh@185 -- # local target=none
00:21:48.017   23:55:18	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:48.017    23:55:18	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:48.017    23:55:18	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:48.276   23:55:18	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:48.276    "name": "raid_bdev1",
00:21:48.276    "uuid": "5b26395d-667b-4e45-ad18-71b44969e408",
00:21:48.276    "strip_size_kb": 0,
00:21:48.276    "state": "online",
00:21:48.276    "raid_level": "raid1",
00:21:48.276    "superblock": true,
00:21:48.276    "num_base_bdevs": 4,
00:21:48.276    "num_base_bdevs_discovered": 3,
00:21:48.276    "num_base_bdevs_operational": 3,
00:21:48.276    "base_bdevs_list": [
00:21:48.276      {
00:21:48.276        "name": null,
00:21:48.276        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:48.276        "is_configured": false,
00:21:48.276        "data_offset": 2048,
00:21:48.276        "data_size": 63488
00:21:48.276      },
00:21:48.276      {
00:21:48.276        "name": "BaseBdev2",
00:21:48.276        "uuid": "40893ce5-8653-59f6-8dbd-618a470e7d9d",
00:21:48.276        "is_configured": true,
00:21:48.276        "data_offset": 2048,
00:21:48.276        "data_size": 63488
00:21:48.276      },
00:21:48.276      {
00:21:48.276        "name": "BaseBdev3",
00:21:48.276        "uuid": "432ac017-f3cb-55ca-9116-d71b9f443ec3",
00:21:48.276        "is_configured": true,
00:21:48.276        "data_offset": 2048,
00:21:48.276        "data_size": 63488
00:21:48.276      },
00:21:48.276      {
00:21:48.276        "name": "BaseBdev4",
00:21:48.276        "uuid": "b4701e34-5269-5e1c-b6b2-e2fc93b35916",
00:21:48.276        "is_configured": true,
00:21:48.276        "data_offset": 2048,
00:21:48.276        "data_size": 63488
00:21:48.276      }
00:21:48.276    ]
00:21:48.276  }'
00:21:48.276    23:55:18	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:48.276   23:55:18	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:21:48.276    23:55:18	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:48.276   23:55:18	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:21:48.276   23:55:18	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:21:48.535  [2024-12-13 23:55:19.217200] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:21:48.535  [2024-12-13 23:55:19.217253] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:48.535   23:55:19	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:21:48.535  [2024-12-13 23:55:19.251382] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:21:48.535  [2024-12-13 23:55:19.253293] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:21:48.794  [2024-12-13 23:55:19.362431] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:21:48.794  [2024-12-13 23:55:19.363099] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:21:48.794  [2024-12-13 23:55:19.471510] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:21:48.794  [2024-12-13 23:55:19.471703] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:21:49.362  [2024-12-13 23:55:19.932386] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:21:49.621   23:55:20	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:49.621   23:55:20	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:49.621   23:55:20	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:49.621   23:55:20	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:49.621   23:55:20	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:49.621    23:55:20	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:49.621    23:55:20	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:49.621  [2024-12-13 23:55:20.266025] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:21:49.621  [2024-12-13 23:55:20.266502] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:21:49.880  [2024-12-13 23:55:20.483169] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:21:49.880  [2024-12-13 23:55:20.483493] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:21:49.880   23:55:20	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:49.880    "name": "raid_bdev1",
00:21:49.880    "uuid": "5b26395d-667b-4e45-ad18-71b44969e408",
00:21:49.880    "strip_size_kb": 0,
00:21:49.880    "state": "online",
00:21:49.880    "raid_level": "raid1",
00:21:49.880    "superblock": true,
00:21:49.880    "num_base_bdevs": 4,
00:21:49.880    "num_base_bdevs_discovered": 4,
00:21:49.880    "num_base_bdevs_operational": 4,
00:21:49.880    "process": {
00:21:49.880      "type": "rebuild",
00:21:49.880      "target": "spare",
00:21:49.880      "progress": {
00:21:49.880        "blocks": 14336,
00:21:49.880        "percent": 22
00:21:49.880      }
00:21:49.880    },
00:21:49.880    "base_bdevs_list": [
00:21:49.880      {
00:21:49.880        "name": "spare",
00:21:49.880        "uuid": "52e29146-df97-5448-82d2-49c653f7929e",
00:21:49.880        "is_configured": true,
00:21:49.880        "data_offset": 2048,
00:21:49.880        "data_size": 63488
00:21:49.880      },
00:21:49.880      {
00:21:49.880        "name": "BaseBdev2",
00:21:49.880        "uuid": "40893ce5-8653-59f6-8dbd-618a470e7d9d",
00:21:49.880        "is_configured": true,
00:21:49.880        "data_offset": 2048,
00:21:49.880        "data_size": 63488
00:21:49.880      },
00:21:49.880      {
00:21:49.880        "name": "BaseBdev3",
00:21:49.880        "uuid": "432ac017-f3cb-55ca-9116-d71b9f443ec3",
00:21:49.880        "is_configured": true,
00:21:49.880        "data_offset": 2048,
00:21:49.880        "data_size": 63488
00:21:49.880      },
00:21:49.880      {
00:21:49.880        "name": "BaseBdev4",
00:21:49.880        "uuid": "b4701e34-5269-5e1c-b6b2-e2fc93b35916",
00:21:49.880        "is_configured": true,
00:21:49.880        "data_offset": 2048,
00:21:49.880        "data_size": 63488
00:21:49.880      }
00:21:49.880    ]
00:21:49.880  }'
00:21:49.880    23:55:20	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:49.880   23:55:20	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:49.880    23:55:20	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:49.880   23:55:20	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:49.880   23:55:20	-- bdev/bdev_raid.sh@617 -- # '[' true = true ']'
00:21:49.880   23:55:20	-- bdev/bdev_raid.sh@617 -- # '[' = false ']'
00:21:49.880  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected
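The "unary operator expected" message above is a shell quoting slip in the test script rather than a product failure: at line 617 one operand of [ expanded to the empty string, so the test degenerated to '[' = false ']'. A minimal repro with a hypothetical variable, and the usual fix:

  flag=
  [ $flag = false ]      # word-splits to: [ = false ]  -> "unary operator expected"
  [ "$flag" = false ]    # quoted: "" is compared to "false", the test just returns false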
00:21:49.880   23:55:20	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4
00:21:49.880   23:55:20	-- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:21:49.880   23:55:20	-- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']'
00:21:49.881   23:55:20	-- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:21:50.139  [2024-12-13 23:55:20.855381] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:21:50.398  [2024-12-13 23:55:21.021462] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005d40
00:21:50.398  [2024-12-13 23:55:21.021500] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005fb0
00:21:50.656   23:55:21	-- bdev/bdev_raid.sh@649 -- # base_bdevs[1]=
00:21:50.656   23:55:21	-- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- ))
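Condensed, the hot-removal step above: BaseBdev2 is pulled from the online array while the rebuild is still running, the per-channel DEBUG lines show slot 1 being cleared on each io channel, and the script mirrors that by emptying base_bdevs[1] and decrementing its operational count. The target-side effect is visible in the next RPC dump (discovered/operational drop from 4 to 3, and slot 1 becomes a null entry with an all-zero uuid):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_remove_base_bdev BaseBdev2
  $rpc bdev_raid_get_bdevs all \
    | jq '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_operational'   # -> 3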
00:21:50.656   23:55:21	-- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:50.656   23:55:21	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:50.656   23:55:21	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:50.656   23:55:21	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:50.656   23:55:21	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:50.656    23:55:21	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:50.656    23:55:21	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:50.656  [2024-12-13 23:55:21.246984] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:21:50.656  [2024-12-13 23:55:21.355686] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:21:50.915   23:55:21	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:50.915    "name": "raid_bdev1",
00:21:50.915    "uuid": "5b26395d-667b-4e45-ad18-71b44969e408",
00:21:50.915    "strip_size_kb": 0,
00:21:50.915    "state": "online",
00:21:50.915    "raid_level": "raid1",
00:21:50.915    "superblock": true,
00:21:50.915    "num_base_bdevs": 4,
00:21:50.915    "num_base_bdevs_discovered": 3,
00:21:50.915    "num_base_bdevs_operational": 3,
00:21:50.915    "process": {
00:21:50.915      "type": "rebuild",
00:21:50.915      "target": "spare",
00:21:50.915      "progress": {
00:21:50.915        "blocks": 28672,
00:21:50.915        "percent": 45
00:21:50.915      }
00:21:50.915    },
00:21:50.915    "base_bdevs_list": [
00:21:50.915      {
00:21:50.915        "name": "spare",
00:21:50.915        "uuid": "52e29146-df97-5448-82d2-49c653f7929e",
00:21:50.915        "is_configured": true,
00:21:50.915        "data_offset": 2048,
00:21:50.915        "data_size": 63488
00:21:50.915      },
00:21:50.915      {
00:21:50.915        "name": null,
00:21:50.915        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:50.915        "is_configured": false,
00:21:50.915        "data_offset": 2048,
00:21:50.915        "data_size": 63488
00:21:50.915      },
00:21:50.915      {
00:21:50.915        "name": "BaseBdev3",
00:21:50.915        "uuid": "432ac017-f3cb-55ca-9116-d71b9f443ec3",
00:21:50.915        "is_configured": true,
00:21:50.915        "data_offset": 2048,
00:21:50.915        "data_size": 63488
00:21:50.915      },
00:21:50.915      {
00:21:50.915        "name": "BaseBdev4",
00:21:50.915        "uuid": "b4701e34-5269-5e1c-b6b2-e2fc93b35916",
00:21:50.915        "is_configured": true,
00:21:50.915        "data_offset": 2048,
00:21:50.915        "data_size": 63488
00:21:50.915      }
00:21:50.915    ]
00:21:50.915  }'
00:21:50.915    23:55:21	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:50.915   23:55:21	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:50.915    23:55:21	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:50.915   23:55:21	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:50.915   23:55:21	-- bdev/bdev_raid.sh@657 -- # local timeout=532
00:21:50.915   23:55:21	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:21:50.915   23:55:21	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:50.915   23:55:21	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:50.915   23:55:21	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:50.915   23:55:21	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:50.915   23:55:21	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:50.915    23:55:21	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:50.915    23:55:21	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:51.173  [2024-12-13 23:55:21.693915] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:21:51.173   23:55:21	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:51.173    "name": "raid_bdev1",
00:21:51.173    "uuid": "5b26395d-667b-4e45-ad18-71b44969e408",
00:21:51.173    "strip_size_kb": 0,
00:21:51.173    "state": "online",
00:21:51.173    "raid_level": "raid1",
00:21:51.173    "superblock": true,
00:21:51.173    "num_base_bdevs": 4,
00:21:51.173    "num_base_bdevs_discovered": 3,
00:21:51.173    "num_base_bdevs_operational": 3,
00:21:51.173    "process": {
00:21:51.173      "type": "rebuild",
00:21:51.173      "target": "spare",
00:21:51.173      "progress": {
00:21:51.173        "blocks": 32768,
00:21:51.173        "percent": 51
00:21:51.173      }
00:21:51.173    },
00:21:51.173    "base_bdevs_list": [
00:21:51.173      {
00:21:51.173        "name": "spare",
00:21:51.173        "uuid": "52e29146-df97-5448-82d2-49c653f7929e",
00:21:51.173        "is_configured": true,
00:21:51.173        "data_offset": 2048,
00:21:51.173        "data_size": 63488
00:21:51.173      },
00:21:51.173      {
00:21:51.173        "name": null,
00:21:51.173        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:51.173        "is_configured": false,
00:21:51.173        "data_offset": 2048,
00:21:51.173        "data_size": 63488
00:21:51.173      },
00:21:51.173      {
00:21:51.173        "name": "BaseBdev3",
00:21:51.173        "uuid": "432ac017-f3cb-55ca-9116-d71b9f443ec3",
00:21:51.173        "is_configured": true,
00:21:51.173        "data_offset": 2048,
00:21:51.173        "data_size": 63488
00:21:51.173      },
00:21:51.173      {
00:21:51.173        "name": "BaseBdev4",
00:21:51.173        "uuid": "b4701e34-5269-5e1c-b6b2-e2fc93b35916",
00:21:51.173        "is_configured": true,
00:21:51.173        "data_offset": 2048,
00:21:51.173        "data_size": 63488
00:21:51.173      }
00:21:51.173    ]
00:21:51.173  }'
00:21:51.173    23:55:21	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:51.173   23:55:21	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:51.173    23:55:21	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:51.173   23:55:21	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:51.173   23:55:21	-- bdev/bdev_raid.sh@662 -- # sleep 1
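The @657-@662 lines above are one iteration of a poll-until-done loop keyed off bash's built-in SECONDS counter (timeout=532 is a deadline on the shell's own runtime clock, not a 532-second wait). Reassembled from the xtrace, the loop is essentially:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  timeout=532
  while (( SECONDS < timeout )); do
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.process.type   // "none"' <<< "$info") == rebuild ]] || break
    [[ $(jq -r '.process.target // "none"' <<< "$info") == spare   ]] || break
    sleep 1
  done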
00:21:52.108   23:55:22	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:21:52.108   23:55:22	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:52.108   23:55:22	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:52.108   23:55:22	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:52.108   23:55:22	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:52.108   23:55:22	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:52.108    23:55:22	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:52.108    23:55:22	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:52.367   23:55:23	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:52.367    "name": "raid_bdev1",
00:21:52.367    "uuid": "5b26395d-667b-4e45-ad18-71b44969e408",
00:21:52.367    "strip_size_kb": 0,
00:21:52.367    "state": "online",
00:21:52.367    "raid_level": "raid1",
00:21:52.367    "superblock": true,
00:21:52.367    "num_base_bdevs": 4,
00:21:52.367    "num_base_bdevs_discovered": 3,
00:21:52.367    "num_base_bdevs_operational": 3,
00:21:52.367    "process": {
00:21:52.367      "type": "rebuild",
00:21:52.367      "target": "spare",
00:21:52.367      "progress": {
00:21:52.367        "blocks": 55296,
00:21:52.367        "percent": 87
00:21:52.367      }
00:21:52.367    },
00:21:52.367    "base_bdevs_list": [
00:21:52.367      {
00:21:52.367        "name": "spare",
00:21:52.367        "uuid": "52e29146-df97-5448-82d2-49c653f7929e",
00:21:52.367        "is_configured": true,
00:21:52.367        "data_offset": 2048,
00:21:52.367        "data_size": 63488
00:21:52.367      },
00:21:52.367      {
00:21:52.367        "name": null,
00:21:52.367        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:52.367        "is_configured": false,
00:21:52.367        "data_offset": 2048,
00:21:52.367        "data_size": 63488
00:21:52.367      },
00:21:52.367      {
00:21:52.367        "name": "BaseBdev3",
00:21:52.367        "uuid": "432ac017-f3cb-55ca-9116-d71b9f443ec3",
00:21:52.367        "is_configured": true,
00:21:52.367        "data_offset": 2048,
00:21:52.367        "data_size": 63488
00:21:52.367      },
00:21:52.367      {
00:21:52.367        "name": "BaseBdev4",
00:21:52.367        "uuid": "b4701e34-5269-5e1c-b6b2-e2fc93b35916",
00:21:52.367        "is_configured": true,
00:21:52.367        "data_offset": 2048,
00:21:52.367        "data_size": 63488
00:21:52.367      }
00:21:52.367    ]
00:21:52.367  }'
00:21:52.367    23:55:23	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:52.626   23:55:23	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:52.626    23:55:23	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:52.626   23:55:23	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:52.626   23:55:23	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:21:52.884  [2024-12-13 23:55:23.361674] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:21:52.884  [2024-12-13 23:55:23.461745] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:21:52.884  [2024-12-13 23:55:23.464371] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:53.450   23:55:24	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:21:53.450   23:55:24	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:53.450   23:55:24	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:53.450   23:55:24	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:53.450   23:55:24	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:53.450   23:55:24	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:53.450    23:55:24	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:53.450    23:55:24	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:53.709   23:55:24	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:53.709    "name": "raid_bdev1",
00:21:53.709    "uuid": "5b26395d-667b-4e45-ad18-71b44969e408",
00:21:53.709    "strip_size_kb": 0,
00:21:53.709    "state": "online",
00:21:53.709    "raid_level": "raid1",
00:21:53.709    "superblock": true,
00:21:53.709    "num_base_bdevs": 4,
00:21:53.709    "num_base_bdevs_discovered": 3,
00:21:53.709    "num_base_bdevs_operational": 3,
00:21:53.709    "base_bdevs_list": [
00:21:53.709      {
00:21:53.709        "name": "spare",
00:21:53.709        "uuid": "52e29146-df97-5448-82d2-49c653f7929e",
00:21:53.709        "is_configured": true,
00:21:53.709        "data_offset": 2048,
00:21:53.709        "data_size": 63488
00:21:53.709      },
00:21:53.709      {
00:21:53.709        "name": null,
00:21:53.709        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:53.709        "is_configured": false,
00:21:53.709        "data_offset": 2048,
00:21:53.709        "data_size": 63488
00:21:53.709      },
00:21:53.709      {
00:21:53.709        "name": "BaseBdev3",
00:21:53.709        "uuid": "432ac017-f3cb-55ca-9116-d71b9f443ec3",
00:21:53.709        "is_configured": true,
00:21:53.709        "data_offset": 2048,
00:21:53.709        "data_size": 63488
00:21:53.709      },
00:21:53.709      {
00:21:53.709        "name": "BaseBdev4",
00:21:53.709        "uuid": "b4701e34-5269-5e1c-b6b2-e2fc93b35916",
00:21:53.709        "is_configured": true,
00:21:53.709        "data_offset": 2048,
00:21:53.709        "data_size": 63488
00:21:53.709      }
00:21:53.709    ]
00:21:53.709  }'
00:21:53.709    23:55:24	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:53.967   23:55:24	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:21:53.967    23:55:24	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:53.967   23:55:24	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:21:53.967   23:55:24	-- bdev/bdev_raid.sh@660 -- # break
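The break above fires because the rebuild completed (the 23:55:23 NOTICE earlier) and the process object vanished from the RPC output; jq's // alternative operator then supplies the "none" fallback that the test compares against. A self-contained illustration:

  echo '{"name": "raid_bdev1"}' | jq -r '.process.type // "none"'   # -> none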
00:21:53.967   23:55:24	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:21:53.967   23:55:24	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:53.967   23:55:24	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:21:53.967   23:55:24	-- bdev/bdev_raid.sh@185 -- # local target=none
00:21:53.968   23:55:24	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:53.968    23:55:24	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:53.968    23:55:24	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:54.226   23:55:24	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:54.226    "name": "raid_bdev1",
00:21:54.226    "uuid": "5b26395d-667b-4e45-ad18-71b44969e408",
00:21:54.226    "strip_size_kb": 0,
00:21:54.226    "state": "online",
00:21:54.226    "raid_level": "raid1",
00:21:54.226    "superblock": true,
00:21:54.226    "num_base_bdevs": 4,
00:21:54.226    "num_base_bdevs_discovered": 3,
00:21:54.226    "num_base_bdevs_operational": 3,
00:21:54.226    "base_bdevs_list": [
00:21:54.226      {
00:21:54.226        "name": "spare",
00:21:54.226        "uuid": "52e29146-df97-5448-82d2-49c653f7929e",
00:21:54.226        "is_configured": true,
00:21:54.226        "data_offset": 2048,
00:21:54.226        "data_size": 63488
00:21:54.226      },
00:21:54.226      {
00:21:54.226        "name": null,
00:21:54.226        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:54.226        "is_configured": false,
00:21:54.226        "data_offset": 2048,
00:21:54.226        "data_size": 63488
00:21:54.226      },
00:21:54.226      {
00:21:54.226        "name": "BaseBdev3",
00:21:54.226        "uuid": "432ac017-f3cb-55ca-9116-d71b9f443ec3",
00:21:54.226        "is_configured": true,
00:21:54.226        "data_offset": 2048,
00:21:54.226        "data_size": 63488
00:21:54.226      },
00:21:54.226      {
00:21:54.226        "name": "BaseBdev4",
00:21:54.226        "uuid": "b4701e34-5269-5e1c-b6b2-e2fc93b35916",
00:21:54.226        "is_configured": true,
00:21:54.226        "data_offset": 2048,
00:21:54.226        "data_size": 63488
00:21:54.226      }
00:21:54.226    ]
00:21:54.226  }'
00:21:54.226    23:55:24	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:54.226   23:55:24	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:21:54.226    23:55:24	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:54.226   23:55:24	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:21:54.226   23:55:24	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:54.226   23:55:24	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:54.226   23:55:24	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:54.226   23:55:24	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:54.226   23:55:24	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:54.226   23:55:24	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:54.226   23:55:24	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:54.226   23:55:24	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:54.226   23:55:24	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:54.226   23:55:24	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:54.226    23:55:24	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:54.226    23:55:24	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:54.485   23:55:25	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:54.485    "name": "raid_bdev1",
00:21:54.485    "uuid": "5b26395d-667b-4e45-ad18-71b44969e408",
00:21:54.485    "strip_size_kb": 0,
00:21:54.486    "state": "online",
00:21:54.486    "raid_level": "raid1",
00:21:54.486    "superblock": true,
00:21:54.486    "num_base_bdevs": 4,
00:21:54.486    "num_base_bdevs_discovered": 3,
00:21:54.486    "num_base_bdevs_operational": 3,
00:21:54.486    "base_bdevs_list": [
00:21:54.486      {
00:21:54.486        "name": "spare",
00:21:54.486        "uuid": "52e29146-df97-5448-82d2-49c653f7929e",
00:21:54.486        "is_configured": true,
00:21:54.486        "data_offset": 2048,
00:21:54.486        "data_size": 63488
00:21:54.486      },
00:21:54.486      {
00:21:54.486        "name": null,
00:21:54.486        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:54.486        "is_configured": false,
00:21:54.486        "data_offset": 2048,
00:21:54.486        "data_size": 63488
00:21:54.486      },
00:21:54.486      {
00:21:54.486        "name": "BaseBdev3",
00:21:54.486        "uuid": "432ac017-f3cb-55ca-9116-d71b9f443ec3",
00:21:54.486        "is_configured": true,
00:21:54.486        "data_offset": 2048,
00:21:54.486        "data_size": 63488
00:21:54.486      },
00:21:54.486      {
00:21:54.486        "name": "BaseBdev4",
00:21:54.486        "uuid": "b4701e34-5269-5e1c-b6b2-e2fc93b35916",
00:21:54.486        "is_configured": true,
00:21:54.486        "data_offset": 2048,
00:21:54.486        "data_size": 63488
00:21:54.486      }
00:21:54.486    ]
00:21:54.486  }'
00:21:54.486   23:55:25	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:54.486   23:55:25	-- common/autotest_common.sh@10 -- # set +x
00:21:55.052   23:55:25	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:21:55.324  [2024-12-13 23:55:25.984480] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:21:55.324  [2024-12-13 23:55:25.984523] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:21:55.626  
00:21:55.626                                                                                                  Latency(us)
00:21:55.626  
[2024-12-13T23:55:26.358Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:55.626  
[2024-12-13T23:55:26.358Z]  Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:21:55.626  	 raid_bdev1          :      11.50     106.76     320.28       0.00     0.00   13481.14     294.17  110100.48
00:21:55.626  
[2024-12-13T23:55:26.358Z]  ===================================================================================================================
00:21:55.626  
[2024-12-13T23:55:26.358Z]  Total                       :                106.76     320.28       0.00     0.00   13481.14     294.17  110100.48
00:21:55.626  0
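A quick cross-check of the summary row above: at the 3 MiB I/O size shown in the Job line (3145728 bytes), 106.76 IOPS is exactly the reported throughput:

  awk 'BEGIN { print 106.76 * 3145728 / (1024 * 1024) }'   # -> 320.28 MiB/s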
00:21:55.626  [2024-12-13 23:55:26.072472] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:55.626  [2024-12-13 23:55:26.072516] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:55.626  [2024-12-13 23:55:26.072614] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:21:55.626  [2024-12-13 23:55:26.072626] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline
00:21:55.626    23:55:26	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:55.626    23:55:26	-- bdev/bdev_raid.sh@671 -- # jq length
00:21:55.626   23:55:26	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:21:55.626   23:55:26	-- bdev/bdev_raid.sh@673 -- # '[' true = true ']'
00:21:55.626   23:55:26	-- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:21:55.626   23:55:26	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:55.626   23:55:26	-- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:21:55.626   23:55:26	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:55.626   23:55:26	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:21:55.626   23:55:26	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:55.626   23:55:26	-- bdev/nbd_common.sh@12 -- # local i
00:21:55.626   23:55:26	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:55.626   23:55:26	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:55.626   23:55:26	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
00:21:55.890  /dev/nbd0
00:21:55.890    23:55:26	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:21:55.890   23:55:26	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:21:55.890   23:55:26	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:21:55.890   23:55:26	-- common/autotest_common.sh@867 -- # local i
00:21:55.890   23:55:26	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:21:55.890   23:55:26	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:21:55.890   23:55:26	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:21:55.890   23:55:26	-- common/autotest_common.sh@871 -- # break
00:21:55.890   23:55:26	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:21:55.890   23:55:26	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:21:55.890   23:55:26	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:55.890  1+0 records in
00:21:55.890  1+0 records out
00:21:55.890  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047409 s, 8.6 MB/s
00:21:55.890    23:55:26	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:56.148   23:55:26	-- common/autotest_common.sh@884 -- # size=4096
00:21:56.148   23:55:26	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:56.148   23:55:26	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:21:56.148   23:55:26	-- common/autotest_common.sh@887 -- # return 0
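The waitfornbd sequence traced above only declares the NBD device usable after two checks: the name shows up in /proc/partitions, and a single 4 KiB O_DIRECT read from the device succeeds (the dd/stat/rm dance). Stripped to its essence, with /tmp/nbdprobe as an illustrative scratch path:

  grep -q -w nbd0 /proc/partitions                               # kernel registered the device
  dd if=/dev/nbd0 of=/tmp/nbdprobe bs=4096 count=1 iflag=direct  # first block actually readable
  rm -f /tmp/nbdprobe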
00:21:56.148   23:55:26	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:56.148   23:55:26	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:56.148   23:55:26	-- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}"
00:21:56.148   23:55:26	-- bdev/bdev_raid.sh@677 -- # '[' -z '' ']'
00:21:56.148   23:55:26	-- bdev/bdev_raid.sh@678 -- # continue
00:21:56.148   23:55:26	-- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}"
00:21:56.148   23:55:26	-- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']'
00:21:56.148   23:55:26	-- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1
00:21:56.148   23:55:26	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:56.148   23:55:26	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3')
00:21:56.148   23:55:26	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:56.148   23:55:26	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:21:56.148   23:55:26	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:56.148   23:55:26	-- bdev/nbd_common.sh@12 -- # local i
00:21:56.148   23:55:26	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:56.148   23:55:26	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:56.148   23:55:26	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1
00:21:56.407  /dev/nbd1
00:21:56.407    23:55:26	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:21:56.407   23:55:26	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:21:56.407   23:55:26	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:21:56.407   23:55:26	-- common/autotest_common.sh@867 -- # local i
00:21:56.407   23:55:26	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:21:56.407   23:55:26	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:21:56.407   23:55:26	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:21:56.407   23:55:26	-- common/autotest_common.sh@871 -- # break
00:21:56.407   23:55:26	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:21:56.407   23:55:26	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:21:56.407   23:55:26	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:56.407  1+0 records in
00:21:56.407  1+0 records out
00:21:56.407  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536924 s, 7.6 MB/s
00:21:56.407    23:55:26	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:56.407   23:55:26	-- common/autotest_common.sh@884 -- # size=4096
00:21:56.407   23:55:26	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:56.407   23:55:26	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:21:56.407   23:55:26	-- common/autotest_common.sh@887 -- # return 0
00:21:56.407   23:55:26	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:56.407   23:55:26	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:56.407   23:55:26	-- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
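The cmp -i 1048576 above compares the spare against BaseBdev3 byte-for-byte while skipping the first 1 MiB of both NBD exports. That offset lines up with the metadata in this trace: data_offset is 2048 blocks and the block length is 512 bytes (see the 23:55:30 configure lines later), i.e. the superblock region is excluded and only the replicated data is compared:

  cmp -i $((2048 * 512)) /dev/nbd0 /dev/nbd1   # same 1048576-byte skip, spelled out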
00:21:56.407   23:55:27	-- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1
00:21:56.407   23:55:27	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:56.407   23:55:27	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:21:56.407   23:55:27	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:56.407   23:55:27	-- bdev/nbd_common.sh@51 -- # local i
00:21:56.407   23:55:27	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:56.407   23:55:27	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:21:56.666    23:55:27	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:21:56.666   23:55:27	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:21:56.666   23:55:27	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:21:56.666   23:55:27	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:56.666   23:55:27	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:56.666   23:55:27	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:21:56.666   23:55:27	-- bdev/nbd_common.sh@41 -- # break
00:21:56.666   23:55:27	-- bdev/nbd_common.sh@45 -- # return 0
00:21:56.666   23:55:27	-- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}"
00:21:56.666   23:55:27	-- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']'
00:21:56.666   23:55:27	-- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1
00:21:56.666   23:55:27	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:56.666   23:55:27	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4')
00:21:56.666   23:55:27	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:56.666   23:55:27	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:21:56.666   23:55:27	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:56.666   23:55:27	-- bdev/nbd_common.sh@12 -- # local i
00:21:56.666   23:55:27	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:56.666   23:55:27	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:56.666   23:55:27	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1
00:21:57.234  /dev/nbd1
00:21:57.234    23:55:27	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:21:57.234   23:55:27	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:21:57.234   23:55:27	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:21:57.234   23:55:27	-- common/autotest_common.sh@867 -- # local i
00:21:57.234   23:55:27	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:21:57.234   23:55:27	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:21:57.234   23:55:27	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:21:57.234   23:55:27	-- common/autotest_common.sh@871 -- # break
00:21:57.234   23:55:27	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:21:57.234   23:55:27	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:21:57.234   23:55:27	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:57.234  1+0 records in
00:21:57.234  1+0 records out
00:21:57.234  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000677096 s, 6.0 MB/s
00:21:57.234    23:55:27	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:57.234   23:55:27	-- common/autotest_common.sh@884 -- # size=4096
00:21:57.234   23:55:27	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:57.234   23:55:27	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:21:57.234   23:55:27	-- common/autotest_common.sh@887 -- # return 0
00:21:57.234   23:55:27	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:57.234   23:55:27	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:57.234   23:55:27	-- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:21:57.234   23:55:27	-- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1
00:21:57.234   23:55:27	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:57.234   23:55:27	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:21:57.234   23:55:27	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:57.234   23:55:27	-- bdev/nbd_common.sh@51 -- # local i
00:21:57.234   23:55:27	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:57.234   23:55:27	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:21:57.492    23:55:28	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:21:57.493   23:55:28	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:21:57.493   23:55:28	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:21:57.493   23:55:28	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:57.493   23:55:28	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:57.493   23:55:28	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:21:57.493   23:55:28	-- bdev/nbd_common.sh@41 -- # break
00:21:57.493   23:55:28	-- bdev/nbd_common.sh@45 -- # return 0
00:21:57.493   23:55:28	-- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:21:57.493   23:55:28	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:57.493   23:55:28	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:21:57.493   23:55:28	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:57.493   23:55:28	-- bdev/nbd_common.sh@51 -- # local i
00:21:57.493   23:55:28	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:57.493   23:55:28	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:21:57.751    23:55:28	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:21:57.751   23:55:28	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:21:57.751   23:55:28	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:21:57.751   23:55:28	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:57.751   23:55:28	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:57.751   23:55:28	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:21:57.751   23:55:28	-- bdev/nbd_common.sh@41 -- # break
00:21:57.751   23:55:28	-- bdev/nbd_common.sh@45 -- # return 0
00:21:57.751   23:55:28	-- bdev/bdev_raid.sh@692 -- # '[' true = true ']'
00:21:57.751   23:55:28	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:21:57.751   23:55:28	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']'
00:21:57.751   23:55:28	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:21:58.010   23:55:28	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:21:58.268  [2024-12-13 23:55:28.840513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:21:58.268  [2024-12-13 23:55:28.840727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:58.268  [2024-12-13 23:55:28.840808] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780
00:21:58.268  [2024-12-13 23:55:28.841075] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:58.268  [2024-12-13 23:55:28.843202] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:58.268  [2024-12-13 23:55:28.843447] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:21:58.268  [2024-12-13 23:55:28.843668] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:21:58.268  [2024-12-13 23:55:28.843863] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:21:58.268  BaseBdev1
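Steps @694-@699 above recreate each surviving member from scratch: delete the passthru bdev, then re-register it on its *_malloc base. As the NOTICE/DEBUG lines show, no raid RPC is needed afterwards; the raid module's examine path spots the on-disk superblock on the re-registered bdev and claims it automatically. Per member, that is just:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_passthru_delete BaseBdev1
  $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1   # examine -> superblock found -> claimed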
00:21:58.268   23:55:28	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:21:58.268   23:55:28	-- bdev/bdev_raid.sh@695 -- # '[' -z '' ']'
00:21:58.268   23:55:28	-- bdev/bdev_raid.sh@696 -- # continue
00:21:58.268   23:55:28	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:21:58.268   23:55:28	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']'
00:21:58.268   23:55:28	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3
00:21:58.527   23:55:29	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:21:58.527  [2024-12-13 23:55:29.260657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:21:58.527  [2024-12-13 23:55:29.260914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:58.527  [2024-12-13 23:55:29.260992] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:21:58.527  [2024-12-13 23:55:29.261112] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:58.786  [2024-12-13 23:55:29.261629] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:58.786  [2024-12-13 23:55:29.261820] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:21:58.786  [2024-12-13 23:55:29.262065] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3
00:21:58.786  [2024-12-13 23:55:29.262176] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1)
00:21:58.786  [2024-12-13 23:55:29.262261] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:21:58.786  [2024-12-13 23:55:29.262407] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring
00:21:58.786  [2024-12-13 23:55:29.262597] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:21:58.786  BaseBdev3
00:21:58.786   23:55:29	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:21:58.786   23:55:29	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']'
00:21:58.786   23:55:29	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4
00:21:59.045   23:55:29	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:21:59.045  [2024-12-13 23:55:29.712768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:21:59.045  [2024-12-13 23:55:29.712968] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:59.045  [2024-12-13 23:55:29.713041] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:21:59.045  [2024-12-13 23:55:29.713316] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:59.045  [2024-12-13 23:55:29.713790] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:59.045  [2024-12-13 23:55:29.713977] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:21:59.045  [2024-12-13 23:55:29.714178] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4
00:21:59.045  [2024-12-13 23:55:29.714307] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:21:59.045  BaseBdev4
00:21:59.045   23:55:29	-- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:21:59.304   23:55:29	-- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:21:59.562  [2024-12-13 23:55:30.092900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:21:59.563  [2024-12-13 23:55:30.093085] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:59.563  [2024-12-13 23:55:30.093154] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980
00:21:59.563  [2024-12-13 23:55:30.093272] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:59.563  [2024-12-13 23:55:30.093720] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:59.563  [2024-12-13 23:55:30.093918] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:21:59.563  [2024-12-13 23:55:30.094112] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare
00:21:59.563  [2024-12-13 23:55:30.094239] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:59.563  spare
00:21:59.563   23:55:30	-- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:59.563   23:55:30	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:59.563   23:55:30	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:59.563   23:55:30	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:59.563   23:55:30	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:59.563   23:55:30	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:59.563   23:55:30	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:59.563   23:55:30	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:59.563   23:55:30	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:59.563   23:55:30	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:59.563    23:55:30	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:59.563    23:55:30	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:59.563  [2024-12-13 23:55:30.194385] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380
00:21:59.563  [2024-12-13 23:55:30.194551] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:21:59.563  [2024-12-13 23:55:30.194713] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230
00:21:59.563  [2024-12-13 23:55:30.195292] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380
00:21:59.563  [2024-12-13 23:55:30.195448] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380
00:21:59.563  [2024-12-13 23:55:30.195722] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
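With spare, BaseBdev3 and BaseBdev4 back, the configure lines above show raid_bdev1 reassembling from superblocks alone and going online with 3 of 4 members; the removed BaseBdev2 stays a null slot with an all-zero uuid in the dump below. A one-liner in the script's own style to confirm:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'   # -> online 3/4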
00:21:59.821   23:55:30	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:59.822    "name": "raid_bdev1",
00:21:59.822    "uuid": "5b26395d-667b-4e45-ad18-71b44969e408",
00:21:59.822    "strip_size_kb": 0,
00:21:59.822    "state": "online",
00:21:59.822    "raid_level": "raid1",
00:21:59.822    "superblock": true,
00:21:59.822    "num_base_bdevs": 4,
00:21:59.822    "num_base_bdevs_discovered": 3,
00:21:59.822    "num_base_bdevs_operational": 3,
00:21:59.822    "base_bdevs_list": [
00:21:59.822      {
00:21:59.822        "name": "spare",
00:21:59.822        "uuid": "52e29146-df97-5448-82d2-49c653f7929e",
00:21:59.822        "is_configured": true,
00:21:59.822        "data_offset": 2048,
00:21:59.822        "data_size": 63488
00:21:59.822      },
00:21:59.822      {
00:21:59.822        "name": null,
00:21:59.822        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:59.822        "is_configured": false,
00:21:59.822        "data_offset": 2048,
00:21:59.822        "data_size": 63488
00:21:59.822      },
00:21:59.822      {
00:21:59.822        "name": "BaseBdev3",
00:21:59.822        "uuid": "432ac017-f3cb-55ca-9116-d71b9f443ec3",
00:21:59.822        "is_configured": true,
00:21:59.822        "data_offset": 2048,
00:21:59.822        "data_size": 63488
00:21:59.822      },
00:21:59.822      {
00:21:59.822        "name": "BaseBdev4",
00:21:59.822        "uuid": "b4701e34-5269-5e1c-b6b2-e2fc93b35916",
00:21:59.822        "is_configured": true,
00:21:59.822        "data_offset": 2048,
00:21:59.822        "data_size": 63488
00:21:59.822      }
00:21:59.822    ]
00:21:59.822  }'
00:21:59.822   23:55:30	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:59.822   23:55:30	-- common/autotest_common.sh@10 -- # set +x
00:22:00.388   23:55:30	-- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none
00:22:00.388   23:55:30	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:00.388   23:55:30	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:22:00.388   23:55:30	-- bdev/bdev_raid.sh@185 -- # local target=none
00:22:00.388   23:55:30	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:00.388    23:55:30	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:00.388    23:55:30	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:00.646   23:55:31	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:00.646    "name": "raid_bdev1",
00:22:00.646    "uuid": "5b26395d-667b-4e45-ad18-71b44969e408",
00:22:00.646    "strip_size_kb": 0,
00:22:00.646    "state": "online",
00:22:00.646    "raid_level": "raid1",
00:22:00.646    "superblock": true,
00:22:00.646    "num_base_bdevs": 4,
00:22:00.646    "num_base_bdevs_discovered": 3,
00:22:00.646    "num_base_bdevs_operational": 3,
00:22:00.646    "base_bdevs_list": [
00:22:00.646      {
00:22:00.646        "name": "spare",
00:22:00.646        "uuid": "52e29146-df97-5448-82d2-49c653f7929e",
00:22:00.646        "is_configured": true,
00:22:00.646        "data_offset": 2048,
00:22:00.646        "data_size": 63488
00:22:00.646      },
00:22:00.646      {
00:22:00.646        "name": null,
00:22:00.646        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:00.646        "is_configured": false,
00:22:00.646        "data_offset": 2048,
00:22:00.646        "data_size": 63488
00:22:00.646      },
00:22:00.646      {
00:22:00.646        "name": "BaseBdev3",
00:22:00.646        "uuid": "432ac017-f3cb-55ca-9116-d71b9f443ec3",
00:22:00.646        "is_configured": true,
00:22:00.646        "data_offset": 2048,
00:22:00.646        "data_size": 63488
00:22:00.646      },
00:22:00.646      {
00:22:00.646        "name": "BaseBdev4",
00:22:00.646        "uuid": "b4701e34-5269-5e1c-b6b2-e2fc93b35916",
00:22:00.646        "is_configured": true,
00:22:00.646        "data_offset": 2048,
00:22:00.646        "data_size": 63488
00:22:00.646      }
00:22:00.646    ]
00:22:00.646  }'
00:22:00.646    23:55:31	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:00.646   23:55:31	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:22:00.646    23:55:31	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:00.646   23:55:31	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:22:00.646    23:55:31	-- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name'
00:22:00.646    23:55:31	-- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:00.904   23:55:31	-- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]]
00:22:00.904   23:55:31	-- bdev/bdev_raid.sh@709 -- # killprocess 126166
00:22:00.904   23:55:31	-- common/autotest_common.sh@936 -- # '[' -z 126166 ']'
00:22:00.904   23:55:31	-- common/autotest_common.sh@940 -- # kill -0 126166
00:22:00.904    23:55:31	-- common/autotest_common.sh@941 -- # uname
00:22:00.904   23:55:31	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:00.904    23:55:31	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126166
00:22:00.904   23:55:31	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:00.904   23:55:31	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:00.904   23:55:31	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 126166'
00:22:00.904  killing process with pid 126166
00:22:00.904   23:55:31	-- common/autotest_common.sh@955 -- # kill 126166
00:22:00.904  Received shutdown signal, test time was about 16.963362 seconds
00:22:00.904  
00:22:00.904                                                                                                  Latency(us)
00:22:00.904  
[2024-12-13T23:55:31.636Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:00.904  
[2024-12-13T23:55:31.636Z]  ===================================================================================================================
00:22:00.904  
[2024-12-13T23:55:31.636Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:22:00.904   23:55:31	-- common/autotest_common.sh@960 -- # wait 126166
00:22:00.904  [2024-12-13 23:55:31.513396] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:22:00.904  [2024-12-13 23:55:31.513638] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:00.904  [2024-12-13 23:55:31.513874] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:00.904  [2024-12-13 23:55:31.514035] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline
00:22:01.162  [2024-12-13 23:55:31.793909] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:22:02.096   23:55:32	-- bdev/bdev_raid.sh@711 -- # return 0
00:22:02.096  
00:22:02.096  real	0m23.170s
00:22:02.096  user	0m37.187s
00:22:02.096  sys	0m2.780s
00:22:02.096  ************************************
00:22:02.096  END TEST raid_rebuild_test_sb_io
00:22:02.096  ************************************
00:22:02.096   23:55:32	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:22:02.096   23:55:32	-- common/autotest_common.sh@10 -- # set +x
00:22:02.096   23:55:32	-- bdev/bdev_raid.sh@742 -- # '[' y == y ']'
00:22:02.096   23:55:32	-- bdev/bdev_raid.sh@743 -- # for n in {3..4}
00:22:02.096   23:55:32	-- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false
00:22:02.096   23:55:32	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:22:02.096   23:55:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:02.096   23:55:32	-- common/autotest_common.sh@10 -- # set +x
00:22:02.354  ************************************
00:22:02.354  START TEST raid5f_state_function_test
00:22:02.354  ************************************
00:22:02.354   23:55:32	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 false
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:22:02.354    23:55:32	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:22:02.354    23:55:32	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:22:02.354    23:55:32	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:22:02.354    23:55:32	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:22:02.354    23:55:32	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:22:02.354    23:55:32	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:22:02.354    23:55:32	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:22:02.354    23:55:32	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:22:02.354    23:55:32	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:22:02.354    23:55:32	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:22:02.354    23:55:32	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']'
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
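Lines @212-@222 above pick the create arguments for this variant: raid5f is striped (unlike the raid1 run earlier, which reported strip_size_kb 0), so the test passes -z 64 for a 64 KiB strip, and with superblock=false the superblock flag stays empty. The resulting create call, as issued at @232 below:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid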
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@226 -- # raid_pid=126784
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:22:02.354  Process raid pid: 126784
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126784'
00:22:02.354   23:55:32	-- bdev/bdev_raid.sh@228 -- # waitforlisten 126784 /var/tmp/spdk-raid.sock
00:22:02.354   23:55:32	-- common/autotest_common.sh@829 -- # '[' -z 126784 ']'
00:22:02.354   23:55:32	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:22:02.354   23:55:32	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:02.354   23:55:32	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:22:02.354  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:22:02.354   23:55:32	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:02.354   23:55:32	-- common/autotest_common.sh@10 -- # set +x
00:22:02.354  [2024-12-13 23:55:32.929453] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:02.354  [2024-12-13 23:55:32.929948] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:02.613  [2024-12-13 23:55:33.094273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:02.613  [2024-12-13 23:55:33.250880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:02.871  [2024-12-13 23:55:33.419785] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:03.437   23:55:33	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:03.437   23:55:33	-- common/autotest_common.sh@862 -- # return 0
00:22:03.437   23:55:33	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:22:03.437  [2024-12-13 23:55:34.131433] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:22:03.437  [2024-12-13 23:55:34.131661] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:22:03.437  [2024-12-13 23:55:34.131792] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:22:03.437  [2024-12-13 23:55:34.131852] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:22:03.437  [2024-12-13 23:55:34.131987] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:22:03.437  [2024-12-13 23:55:34.132084] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:22:03.437   23:55:34	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:03.437   23:55:34	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:22:03.437   23:55:34	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:03.438   23:55:34	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:03.438   23:55:34	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:03.438   23:55:34	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:03.438   23:55:34	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:03.438   23:55:34	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:03.438   23:55:34	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:03.438   23:55:34	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:03.438    23:55:34	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:03.438    23:55:34	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:03.696   23:55:34	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:03.696    "name": "Existed_Raid",
00:22:03.696    "uuid": "00000000-0000-0000-0000-000000000000",
00:22:03.696    "strip_size_kb": 64,
00:22:03.696    "state": "configuring",
00:22:03.696    "raid_level": "raid5f",
00:22:03.696    "superblock": false,
00:22:03.696    "num_base_bdevs": 3,
00:22:03.696    "num_base_bdevs_discovered": 0,
00:22:03.696    "num_base_bdevs_operational": 3,
00:22:03.696    "base_bdevs_list": [
00:22:03.696      {
00:22:03.696        "name": "BaseBdev1",
00:22:03.696        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:03.696        "is_configured": false,
00:22:03.696        "data_offset": 0,
00:22:03.696        "data_size": 0
00:22:03.696      },
00:22:03.696      {
00:22:03.696        "name": "BaseBdev2",
00:22:03.696        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:03.696        "is_configured": false,
00:22:03.696        "data_offset": 0,
00:22:03.696        "data_size": 0
00:22:03.696      },
00:22:03.696      {
00:22:03.696        "name": "BaseBdev3",
00:22:03.696        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:03.696        "is_configured": false,
00:22:03.696        "data_offset": 0,
00:22:03.696        "data_size": 0
00:22:03.696      }
00:22:03.696    ]
00:22:03.696  }'
00:22:03.696   23:55:34	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:03.696   23:55:34	-- common/autotest_common.sh@10 -- # set +x
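Two things worth noting in the dump above. First, the @232 create succeeded even though none of the base bdevs existed yet: the raid bdev is registered in the configuring state and claims each base bdev as it shows up, which is what the rest of this test exercises. Second, verify_raid_bdev_state is essentially bdev_raid_get_bdevs piped through a jq select, as the @127 traces show; a condensed sketch of the comparisons, using field names from the dump (the real helper checks more of them):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")')
    # state machine: configuring until all base bdevs are present, then online
    [[ $(jq -r .state <<< "$info") == configuring ]]
    [[ $(jq -r .raid_level <<< "$info") == raid5f ]]
    [[ $(jq -r .num_base_bdevs_operational <<< "$info") -eq 3 ]]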
00:22:04.262   23:55:34	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:22:04.522  [2024-12-13 23:55:35.171530] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:22:04.522  [2024-12-13 23:55:35.171694] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:22:04.522   23:55:35	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:22:04.781  [2024-12-13 23:55:35.343596] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:22:04.781  [2024-12-13 23:55:35.343775] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:22:04.781  [2024-12-13 23:55:35.343908] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:22:04.781  [2024-12-13 23:55:35.343976] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:22:04.781  [2024-12-13 23:55:35.344106] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:22:04.781  [2024-12-13 23:55:35.344171] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:22:04.781   23:55:35	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:22:05.039  [2024-12-13 23:55:35.614355] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:05.039  BaseBdev1
00:22:05.039   23:55:35	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:22:05.039   23:55:35	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:22:05.039   23:55:35	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:22:05.039   23:55:35	-- common/autotest_common.sh@899 -- # local i
00:22:05.039   23:55:35	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:22:05.039   23:55:35	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:22:05.039   23:55:35	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:22:05.298   23:55:35	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:22:05.298  [
00:22:05.298    {
00:22:05.298      "name": "BaseBdev1",
00:22:05.298      "aliases": [
00:22:05.298        "86a99ade-cc06-48ed-9554-7f3f1a8dc54d"
00:22:05.298      ],
00:22:05.298      "product_name": "Malloc disk",
00:22:05.298      "block_size": 512,
00:22:05.298      "num_blocks": 65536,
00:22:05.298      "uuid": "86a99ade-cc06-48ed-9554-7f3f1a8dc54d",
00:22:05.298      "assigned_rate_limits": {
00:22:05.298        "rw_ios_per_sec": 0,
00:22:05.298        "rw_mbytes_per_sec": 0,
00:22:05.298        "r_mbytes_per_sec": 0,
00:22:05.298        "w_mbytes_per_sec": 0
00:22:05.298      },
00:22:05.298      "claimed": true,
00:22:05.298      "claim_type": "exclusive_write",
00:22:05.298      "zoned": false,
00:22:05.298      "supported_io_types": {
00:22:05.298        "read": true,
00:22:05.298        "write": true,
00:22:05.298        "unmap": true,
00:22:05.298        "write_zeroes": true,
00:22:05.298        "flush": true,
00:22:05.298        "reset": true,
00:22:05.298        "compare": false,
00:22:05.298        "compare_and_write": false,
00:22:05.298        "abort": true,
00:22:05.298        "nvme_admin": false,
00:22:05.298        "nvme_io": false
00:22:05.298      },
00:22:05.298      "memory_domains": [
00:22:05.298        {
00:22:05.298          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:05.298          "dma_device_type": 2
00:22:05.298        }
00:22:05.298      ],
00:22:05.298      "driver_specific": {}
00:22:05.298    }
00:22:05.298  ]
00:22:05.298   23:55:35	-- common/autotest_common.sh@905 -- # return 0
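The num_blocks above is plain arithmetic from the @239 create: bdev_malloc_create 32 512 asks for a 32 MiB bdev with 512-byte blocks, and 32 * 1024 * 1024 / 512 = 65536. The waitforbdev helper that produced this dump is also simple; a sketch, where the -t 2000 timeout matches the @904 trace and the rest is illustrative:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # drain any pending examine callbacks so claims have settled
    "$rpc" -s "$sock" bdev_wait_for_examine
    # -t 2000 lets the RPC itself wait up to 2000 ms for the bdev to register
    "$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev1 -t 2000 > /dev/null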
00:22:05.298   23:55:35	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:05.298   23:55:35	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:22:05.298   23:55:35	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:05.298   23:55:35	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:05.298   23:55:35	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:05.298   23:55:35	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:05.298   23:55:35	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:05.298   23:55:35	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:05.298   23:55:35	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:05.298   23:55:35	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:05.298    23:55:35	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:05.298    23:55:35	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:05.557   23:55:36	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:05.557    "name": "Existed_Raid",
00:22:05.557    "uuid": "00000000-0000-0000-0000-000000000000",
00:22:05.557    "strip_size_kb": 64,
00:22:05.557    "state": "configuring",
00:22:05.557    "raid_level": "raid5f",
00:22:05.557    "superblock": false,
00:22:05.557    "num_base_bdevs": 3,
00:22:05.557    "num_base_bdevs_discovered": 1,
00:22:05.557    "num_base_bdevs_operational": 3,
00:22:05.557    "base_bdevs_list": [
00:22:05.557      {
00:22:05.557        "name": "BaseBdev1",
00:22:05.557        "uuid": "86a99ade-cc06-48ed-9554-7f3f1a8dc54d",
00:22:05.557        "is_configured": true,
00:22:05.557        "data_offset": 0,
00:22:05.557        "data_size": 65536
00:22:05.557      },
00:22:05.557      {
00:22:05.557        "name": "BaseBdev2",
00:22:05.557        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:05.557        "is_configured": false,
00:22:05.557        "data_offset": 0,
00:22:05.557        "data_size": 0
00:22:05.557      },
00:22:05.557      {
00:22:05.557        "name": "BaseBdev3",
00:22:05.557        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:05.557        "is_configured": false,
00:22:05.557        "data_offset": 0,
00:22:05.557        "data_size": 0
00:22:05.557      }
00:22:05.557    ]
00:22:05.557  }'
00:22:05.557   23:55:36	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:05.557   23:55:36	-- common/autotest_common.sh@10 -- # set +x
00:22:06.491   23:55:36	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:22:06.491  [2024-12-13 23:55:37.062627] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:22:06.491  [2024-12-13 23:55:37.062823] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:22:06.491   23:55:37	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:22:06.491   23:55:37	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:22:06.749  [2024-12-13 23:55:37.298728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:06.749  [2024-12-13 23:55:37.300754] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:22:06.749  [2024-12-13 23:55:37.300951] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:22:06.749  [2024-12-13 23:55:37.301097] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:22:06.749  [2024-12-13 23:55:37.301166] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:22:06.749   23:55:37	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:22:06.749   23:55:37	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:22:06.749   23:55:37	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:06.749   23:55:37	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:22:06.749   23:55:37	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:06.749   23:55:37	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:06.749   23:55:37	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:06.749   23:55:37	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:06.749   23:55:37	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:06.749   23:55:37	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:06.749   23:55:37	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:06.749   23:55:37	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:06.749    23:55:37	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:06.749    23:55:37	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:07.007   23:55:37	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:07.007    "name": "Existed_Raid",
00:22:07.007    "uuid": "00000000-0000-0000-0000-000000000000",
00:22:07.007    "strip_size_kb": 64,
00:22:07.007    "state": "configuring",
00:22:07.008    "raid_level": "raid5f",
00:22:07.008    "superblock": false,
00:22:07.008    "num_base_bdevs": 3,
00:22:07.008    "num_base_bdevs_discovered": 1,
00:22:07.008    "num_base_bdevs_operational": 3,
00:22:07.008    "base_bdevs_list": [
00:22:07.008      {
00:22:07.008        "name": "BaseBdev1",
00:22:07.008        "uuid": "86a99ade-cc06-48ed-9554-7f3f1a8dc54d",
00:22:07.008        "is_configured": true,
00:22:07.008        "data_offset": 0,
00:22:07.008        "data_size": 65536
00:22:07.008      },
00:22:07.008      {
00:22:07.008        "name": "BaseBdev2",
00:22:07.008        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:07.008        "is_configured": false,
00:22:07.008        "data_offset": 0,
00:22:07.008        "data_size": 0
00:22:07.008      },
00:22:07.008      {
00:22:07.008        "name": "BaseBdev3",
00:22:07.008        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:07.008        "is_configured": false,
00:22:07.008        "data_offset": 0,
00:22:07.008        "data_size": 0
00:22:07.008      }
00:22:07.008    ]
00:22:07.008  }'
00:22:07.008   23:55:37	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:07.008   23:55:37	-- common/autotest_common.sh@10 -- # set +x
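From here the @254 loop adds the remaining base bdevs one at a time, re-running verify_raid_bdev_state after each; the array must stay configuring until the last member arrives. The shape of the loop, sketched with the bdev list from the @206 trace at the top of this test:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    base_bdevs=(BaseBdev1 BaseBdev2 BaseBdev3)
    for ((i = 1; i < ${#base_bdevs[@]}; i++)); do
        # each new malloc bdev is claimed on sight by the configuring raid
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "${base_bdevs[i]}"
        "$rpc" -s "$sock" bdev_wait_for_examine
    done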
00:22:07.574   23:55:38	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:22:07.831  [2024-12-13 23:55:38.338285] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:22:07.832  BaseBdev2
00:22:07.832   23:55:38	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:22:07.832   23:55:38	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:22:07.832   23:55:38	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:22:07.832   23:55:38	-- common/autotest_common.sh@899 -- # local i
00:22:07.832   23:55:38	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:22:07.832   23:55:38	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:22:07.832   23:55:38	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:22:08.090   23:55:38	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:22:08.348  [
00:22:08.348    {
00:22:08.348      "name": "BaseBdev2",
00:22:08.348      "aliases": [
00:22:08.348        "58e912f8-3ab2-4ff2-a09d-34206c9d8d93"
00:22:08.348      ],
00:22:08.348      "product_name": "Malloc disk",
00:22:08.348      "block_size": 512,
00:22:08.348      "num_blocks": 65536,
00:22:08.348      "uuid": "58e912f8-3ab2-4ff2-a09d-34206c9d8d93",
00:22:08.348      "assigned_rate_limits": {
00:22:08.348        "rw_ios_per_sec": 0,
00:22:08.348        "rw_mbytes_per_sec": 0,
00:22:08.348        "r_mbytes_per_sec": 0,
00:22:08.348        "w_mbytes_per_sec": 0
00:22:08.348      },
00:22:08.348      "claimed": true,
00:22:08.348      "claim_type": "exclusive_write",
00:22:08.348      "zoned": false,
00:22:08.348      "supported_io_types": {
00:22:08.348        "read": true,
00:22:08.348        "write": true,
00:22:08.348        "unmap": true,
00:22:08.348        "write_zeroes": true,
00:22:08.348        "flush": true,
00:22:08.348        "reset": true,
00:22:08.348        "compare": false,
00:22:08.348        "compare_and_write": false,
00:22:08.348        "abort": true,
00:22:08.348        "nvme_admin": false,
00:22:08.348        "nvme_io": false
00:22:08.348      },
00:22:08.348      "memory_domains": [
00:22:08.348        {
00:22:08.348          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:08.348          "dma_device_type": 2
00:22:08.348        }
00:22:08.348      ],
00:22:08.348      "driver_specific": {}
00:22:08.348    }
00:22:08.348  ]
00:22:08.348   23:55:38	-- common/autotest_common.sh@905 -- # return 0
00:22:08.348   23:55:38	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:22:08.348   23:55:38	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:22:08.348   23:55:38	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:08.348   23:55:38	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:22:08.348   23:55:38	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:08.348   23:55:38	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:08.348   23:55:38	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:08.348   23:55:38	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:08.348   23:55:38	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:08.348   23:55:38	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:08.348   23:55:38	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:08.348   23:55:38	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:08.348    23:55:38	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:08.348    23:55:38	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:08.607   23:55:39	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:08.607    "name": "Existed_Raid",
00:22:08.607    "uuid": "00000000-0000-0000-0000-000000000000",
00:22:08.607    "strip_size_kb": 64,
00:22:08.607    "state": "configuring",
00:22:08.607    "raid_level": "raid5f",
00:22:08.607    "superblock": false,
00:22:08.607    "num_base_bdevs": 3,
00:22:08.607    "num_base_bdevs_discovered": 2,
00:22:08.607    "num_base_bdevs_operational": 3,
00:22:08.607    "base_bdevs_list": [
00:22:08.607      {
00:22:08.607        "name": "BaseBdev1",
00:22:08.607        "uuid": "86a99ade-cc06-48ed-9554-7f3f1a8dc54d",
00:22:08.607        "is_configured": true,
00:22:08.607        "data_offset": 0,
00:22:08.607        "data_size": 65536
00:22:08.607      },
00:22:08.607      {
00:22:08.607        "name": "BaseBdev2",
00:22:08.607        "uuid": "58e912f8-3ab2-4ff2-a09d-34206c9d8d93",
00:22:08.607        "is_configured": true,
00:22:08.607        "data_offset": 0,
00:22:08.607        "data_size": 65536
00:22:08.607      },
00:22:08.607      {
00:22:08.607        "name": "BaseBdev3",
00:22:08.607        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:08.607        "is_configured": false,
00:22:08.607        "data_offset": 0,
00:22:08.607        "data_size": 0
00:22:08.607      }
00:22:08.607    ]
00:22:08.607  }'
00:22:08.607   23:55:39	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:08.607   23:55:39	-- common/autotest_common.sh@10 -- # set +x
00:22:09.174   23:55:39	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:22:09.431  [2024-12-13 23:55:39.986048] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:22:09.431  [2024-12-13 23:55:39.986522] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:22:09.431  [2024-12-13 23:55:39.986801] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:22:09.431  [2024-12-13 23:55:39.987328] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0
00:22:09.431  [2024-12-13 23:55:39.999813] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:22:09.431  [2024-12-13 23:55:39.999980] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80
00:22:09.431  [2024-12-13 23:55:40.000497] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:09.432  BaseBdev3
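Claiming the third member flips the state machine: the raid registers its io device and comes online, per the @1584-@1615 debug lines above. The @1585 blockcnt is the array's usable capacity: raid5f gives up one base bdev's worth of space to parity, so with three 65536-block members the arithmetic is:

    # raid5f usable blocks = (num_base_bdevs - 1) * per-bdev blocks
    echo $(( (3 - 1) * 65536 ))   # 131072, matching the @1585 blockcnt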
00:22:09.432   23:55:40	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:22:09.432   23:55:40	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:22:09.432   23:55:40	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:22:09.432   23:55:40	-- common/autotest_common.sh@899 -- # local i
00:22:09.432   23:55:40	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:22:09.432   23:55:40	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:22:09.432   23:55:40	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:22:09.689   23:55:40	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:22:09.948  [
00:22:09.948    {
00:22:09.948      "name": "BaseBdev3",
00:22:09.948      "aliases": [
00:22:09.948        "70afebaa-003b-4524-bcc9-03b4caf1c836"
00:22:09.948      ],
00:22:09.948      "product_name": "Malloc disk",
00:22:09.948      "block_size": 512,
00:22:09.948      "num_blocks": 65536,
00:22:09.948      "uuid": "70afebaa-003b-4524-bcc9-03b4caf1c836",
00:22:09.948      "assigned_rate_limits": {
00:22:09.948        "rw_ios_per_sec": 0,
00:22:09.948        "rw_mbytes_per_sec": 0,
00:22:09.948        "r_mbytes_per_sec": 0,
00:22:09.948        "w_mbytes_per_sec": 0
00:22:09.948      },
00:22:09.948      "claimed": true,
00:22:09.948      "claim_type": "exclusive_write",
00:22:09.948      "zoned": false,
00:22:09.948      "supported_io_types": {
00:22:09.948        "read": true,
00:22:09.948        "write": true,
00:22:09.948        "unmap": true,
00:22:09.948        "write_zeroes": true,
00:22:09.948        "flush": true,
00:22:09.948        "reset": true,
00:22:09.948        "compare": false,
00:22:09.948        "compare_and_write": false,
00:22:09.948        "abort": true,
00:22:09.948        "nvme_admin": false,
00:22:09.948        "nvme_io": false
00:22:09.948      },
00:22:09.948      "memory_domains": [
00:22:09.948        {
00:22:09.948          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:09.948          "dma_device_type": 2
00:22:09.948        }
00:22:09.948      ],
00:22:09.948      "driver_specific": {}
00:22:09.948    }
00:22:09.948  ]
00:22:09.948   23:55:40	-- common/autotest_common.sh@905 -- # return 0
00:22:09.948   23:55:40	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:22:09.948   23:55:40	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:22:09.948   23:55:40	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:22:09.948   23:55:40	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:22:09.948   23:55:40	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:09.948   23:55:40	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:09.948   23:55:40	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:09.948   23:55:40	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:09.948   23:55:40	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:09.948   23:55:40	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:09.948   23:55:40	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:09.948   23:55:40	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:09.948    23:55:40	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:09.948    23:55:40	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:10.206   23:55:40	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:10.206    "name": "Existed_Raid",
00:22:10.206    "uuid": "9d06ee9a-8477-474c-bbdf-504cf0bd9f63",
00:22:10.206    "strip_size_kb": 64,
00:22:10.206    "state": "online",
00:22:10.206    "raid_level": "raid5f",
00:22:10.206    "superblock": false,
00:22:10.206    "num_base_bdevs": 3,
00:22:10.206    "num_base_bdevs_discovered": 3,
00:22:10.206    "num_base_bdevs_operational": 3,
00:22:10.206    "base_bdevs_list": [
00:22:10.206      {
00:22:10.206        "name": "BaseBdev1",
00:22:10.206        "uuid": "86a99ade-cc06-48ed-9554-7f3f1a8dc54d",
00:22:10.206        "is_configured": true,
00:22:10.206        "data_offset": 0,
00:22:10.206        "data_size": 65536
00:22:10.206      },
00:22:10.206      {
00:22:10.206        "name": "BaseBdev2",
00:22:10.206        "uuid": "58e912f8-3ab2-4ff2-a09d-34206c9d8d93",
00:22:10.206        "is_configured": true,
00:22:10.206        "data_offset": 0,
00:22:10.206        "data_size": 65536
00:22:10.206      },
00:22:10.206      {
00:22:10.206        "name": "BaseBdev3",
00:22:10.206        "uuid": "70afebaa-003b-4524-bcc9-03b4caf1c836",
00:22:10.206        "is_configured": true,
00:22:10.206        "data_offset": 0,
00:22:10.206        "data_size": 65536
00:22:10.206      }
00:22:10.206    ]
00:22:10.206  }'
00:22:10.206   23:55:40	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:10.206   23:55:40	-- common/autotest_common.sh@10 -- # set +x
00:22:10.772   23:55:41	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:22:11.030  [2024-12-13 23:55:41.622071] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:22:11.030   23:55:41	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:22:11.030   23:55:41	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f
00:22:11.030   23:55:41	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:22:11.030   23:55:41	-- bdev/bdev_raid.sh@196 -- # return 0
00:22:11.030   23:55:41	-- bdev/bdev_raid.sh@267 -- # expected_state=online
00:22:11.030   23:55:41	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2
00:22:11.030   23:55:41	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:22:11.030   23:55:41	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:11.030   23:55:41	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:11.030   23:55:41	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:11.030   23:55:41	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:11.030   23:55:41	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:11.030   23:55:41	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:11.030   23:55:41	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:11.030   23:55:41	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:11.030    23:55:41	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:11.030    23:55:41	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:11.287   23:55:41	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:11.287    "name": "Existed_Raid",
00:22:11.287    "uuid": "9d06ee9a-8477-474c-bbdf-504cf0bd9f63",
00:22:11.287    "strip_size_kb": 64,
00:22:11.287    "state": "online",
00:22:11.287    "raid_level": "raid5f",
00:22:11.287    "superblock": false,
00:22:11.287    "num_base_bdevs": 3,
00:22:11.287    "num_base_bdevs_discovered": 2,
00:22:11.288    "num_base_bdevs_operational": 2,
00:22:11.288    "base_bdevs_list": [
00:22:11.288      {
00:22:11.288        "name": null,
00:22:11.288        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:11.288        "is_configured": false,
00:22:11.288        "data_offset": 0,
00:22:11.288        "data_size": 65536
00:22:11.288      },
00:22:11.288      {
00:22:11.288        "name": "BaseBdev2",
00:22:11.288        "uuid": "58e912f8-3ab2-4ff2-a09d-34206c9d8d93",
00:22:11.288        "is_configured": true,
00:22:11.288        "data_offset": 0,
00:22:11.288        "data_size": 65536
00:22:11.288      },
00:22:11.288      {
00:22:11.288        "name": "BaseBdev3",
00:22:11.288        "uuid": "70afebaa-003b-4524-bcc9-03b4caf1c836",
00:22:11.288        "is_configured": true,
00:22:11.288        "data_offset": 0,
00:22:11.288        "data_size": 65536
00:22:11.288      }
00:22:11.288    ]
00:22:11.288  }'
00:22:11.288   23:55:41	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:11.288   23:55:41	-- common/autotest_common.sh@10 -- # set +x
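Deleting BaseBdev1 at @262 is the degradation check: has_redundancy returns success for raid5f, so expected_state stays online even though num_base_bdevs_discovered dropped to 2 and the first slot is reported as name: null in the dump above. A sketch of the same probe, with jq -e turning the state check into an exit status:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
    # a raid5f array tolerates exactly one missing base bdev
    "$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -e '.[] | select(.name == "Existed_Raid") | .state == "online"'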
00:22:11.853   23:55:42	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:22:11.853   23:55:42	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:22:11.853    23:55:42	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:11.853    23:55:42	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:22:12.110   23:55:42	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:22:12.110   23:55:42	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:22:12.110   23:55:42	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:22:12.369  [2024-12-13 23:55:42.993791] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:22:12.369  [2024-12-13 23:55:42.993947] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:12.369  [2024-12-13 23:55:42.994094] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:12.369   23:55:43	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:22:12.369   23:55:43	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:22:12.369    23:55:43	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:12.369    23:55:43	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:22:12.627   23:55:43	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:22:12.627   23:55:43	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:22:12.627   23:55:43	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:22:12.885  [2024-12-13 23:55:43.565793] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:22:12.885  [2024-12-13 23:55:43.565990] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline
00:22:13.143   23:55:43	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:22:13.143   23:55:43	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:22:13.143    23:55:43	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:13.143    23:55:43	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:22:13.143   23:55:43	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:22:13.143   23:55:43	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:22:13.143   23:55:43	-- bdev/bdev_raid.sh@287 -- # killprocess 126784
00:22:13.143   23:55:43	-- common/autotest_common.sh@936 -- # '[' -z 126784 ']'
00:22:13.143   23:55:43	-- common/autotest_common.sh@940 -- # kill -0 126784
00:22:13.143    23:55:43	-- common/autotest_common.sh@941 -- # uname
00:22:13.143   23:55:43	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:13.143    23:55:43	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126784
00:22:13.143  killing process with pid 126784
00:22:13.143   23:55:43	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:13.143   23:55:43	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:13.143   23:55:43	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 126784'
00:22:13.143   23:55:43	-- common/autotest_common.sh@955 -- # kill 126784
00:22:13.143   23:55:43	-- common/autotest_common.sh@960 -- # wait 126784
00:22:13.143  [2024-12-13 23:55:43.859151] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:22:13.143  [2024-12-13 23:55:43.859254] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
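With no raid bdev left (the @281 jq came back empty, hence raid_bdev= above), killprocess tears down the bdev_svc app. The @940-@960 traces show its safety checks; reduced to a sketch, with the signal handling simplified:

    pid=126784
    # refuse to act if the pid is gone or no longer looks like our reactor
    if kill -0 "$pid" 2> /dev/null &&
       [[ $(ps --no-headers -o comm= "$pid") == reactor_0 ]]; then
        kill "$pid"
        wait "$pid"   # works here because bdev_svc is a child of the test shell
    fi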
00:22:14.078  ************************************
00:22:14.078  END TEST raid5f_state_function_test
00:22:14.078  ************************************
00:22:14.078   23:55:44	-- bdev/bdev_raid.sh@289 -- # return 0
00:22:14.078  
00:22:14.078  real	0m11.940s
00:22:14.078  user	0m21.023s
00:22:14.078  sys	0m1.495s
00:22:14.078   23:55:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:22:14.078   23:55:44	-- common/autotest_common.sh@10 -- # set +x
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true
00:22:14.336   23:55:44	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:22:14.336   23:55:44	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:14.336   23:55:44	-- common/autotest_common.sh@10 -- # set +x
00:22:14.336  ************************************
00:22:14.336  START TEST raid5f_state_function_test_sb
00:22:14.336  ************************************
00:22:14.336   23:55:44	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 true
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:22:14.336    23:55:44	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:22:14.336    23:55:44	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:22:14.336    23:55:44	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:22:14.336    23:55:44	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:22:14.336    23:55:44	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:22:14.336    23:55:44	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:22:14.336    23:55:44	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:22:14.336    23:55:44	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:22:14.336    23:55:44	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:22:14.336    23:55:44	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:22:14.336    23:55:44	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']'
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
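This is the same state-function test run again with superblock=true, so the only material difference is the -s flag that @220 just folded into superblock_create_arg: every create below asks for an on-disk superblock on the claimed base bdevs. The create call it produces, for comparison with the earlier run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # identical to the non-superblock run except for -s
    "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid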
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@226 -- # raid_pid=127161
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127161'
00:22:14.336  Process raid pid: 127161
00:22:14.336   23:55:44	-- bdev/bdev_raid.sh@228 -- # waitforlisten 127161 /var/tmp/spdk-raid.sock
00:22:14.336   23:55:44	-- common/autotest_common.sh@829 -- # '[' -z 127161 ']'
00:22:14.336   23:55:44	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:22:14.336   23:55:44	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:14.336   23:55:44	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:22:14.336  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:22:14.336   23:55:44	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:14.336   23:55:44	-- common/autotest_common.sh@10 -- # set +x
00:22:14.336  [2024-12-13 23:55:44.918163] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:14.336  [2024-12-13 23:55:44.918603] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:14.595  [2024-12-13 23:55:45.089779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:14.595  [2024-12-13 23:55:45.252015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:14.854  [2024-12-13 23:55:45.422206] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:15.420   23:55:45	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:15.420   23:55:45	-- common/autotest_common.sh@862 -- # return 0
00:22:15.420   23:55:45	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:22:15.420  [2024-12-13 23:55:46.086796] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:22:15.420  [2024-12-13 23:55:46.087024] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:22:15.420  [2024-12-13 23:55:46.087156] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:22:15.420  [2024-12-13 23:55:46.087314] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:22:15.420  [2024-12-13 23:55:46.087442] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:22:15.420  [2024-12-13 23:55:46.087616] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:22:15.420   23:55:46	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:15.420   23:55:46	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:22:15.420   23:55:46	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:15.420   23:55:46	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:15.420   23:55:46	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:15.420   23:55:46	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:15.420   23:55:46	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:15.420   23:55:46	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:15.420   23:55:46	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:15.420   23:55:46	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:15.420    23:55:46	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:15.420    23:55:46	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:15.678   23:55:46	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:15.678    "name": "Existed_Raid",
00:22:15.678    "uuid": "e1345348-3eff-4233-9965-5b5c2553f5ba",
00:22:15.678    "strip_size_kb": 64,
00:22:15.678    "state": "configuring",
00:22:15.678    "raid_level": "raid5f",
00:22:15.678    "superblock": true,
00:22:15.678    "num_base_bdevs": 3,
00:22:15.678    "num_base_bdevs_discovered": 0,
00:22:15.678    "num_base_bdevs_operational": 3,
00:22:15.678    "base_bdevs_list": [
00:22:15.678      {
00:22:15.678        "name": "BaseBdev1",
00:22:15.678        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:15.678        "is_configured": false,
00:22:15.678        "data_offset": 0,
00:22:15.678        "data_size": 0
00:22:15.678      },
00:22:15.678      {
00:22:15.678        "name": "BaseBdev2",
00:22:15.678        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:15.678        "is_configured": false,
00:22:15.678        "data_offset": 0,
00:22:15.678        "data_size": 0
00:22:15.678      },
00:22:15.678      {
00:22:15.678        "name": "BaseBdev3",
00:22:15.678        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:15.678        "is_configured": false,
00:22:15.678        "data_offset": 0,
00:22:15.678        "data_size": 0
00:22:15.678      }
00:22:15.678    ]
00:22:15.678  }'
00:22:15.678   23:55:46	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:15.678   23:55:46	-- common/autotest_common.sh@10 -- # set +x
00:22:16.244   23:55:46	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:22:16.502  [2024-12-13 23:55:47.086838] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:22:16.502  [2024-12-13 23:55:47.086989] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:22:16.502   23:55:47	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:22:16.766  [2024-12-13 23:55:47.346945] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:22:16.766  [2024-12-13 23:55:47.347144] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:22:16.766  [2024-12-13 23:55:47.347255] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:22:16.766  [2024-12-13 23:55:47.347414] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:22:16.766  [2024-12-13 23:55:47.347516] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:22:16.766  [2024-12-13 23:55:47.347599] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:22:16.766   23:55:47	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:22:17.051  [2024-12-13 23:55:47.600870] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:17.051  BaseBdev1
00:22:17.051   23:55:47	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:22:17.051   23:55:47	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:22:17.051   23:55:47	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:22:17.051   23:55:47	-- common/autotest_common.sh@899 -- # local i
00:22:17.051   23:55:47	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:22:17.051   23:55:47	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:22:17.051   23:55:47	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:22:17.326   23:55:47	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:22:17.326  [
00:22:17.326    {
00:22:17.326      "name": "BaseBdev1",
00:22:17.326      "aliases": [
00:22:17.326        "3b7b6ea8-203a-480e-a1b3-3c4c783482f9"
00:22:17.326      ],
00:22:17.326      "product_name": "Malloc disk",
00:22:17.326      "block_size": 512,
00:22:17.326      "num_blocks": 65536,
00:22:17.326      "uuid": "3b7b6ea8-203a-480e-a1b3-3c4c783482f9",
00:22:17.326      "assigned_rate_limits": {
00:22:17.326        "rw_ios_per_sec": 0,
00:22:17.326        "rw_mbytes_per_sec": 0,
00:22:17.326        "r_mbytes_per_sec": 0,
00:22:17.326        "w_mbytes_per_sec": 0
00:22:17.326      },
00:22:17.326      "claimed": true,
00:22:17.326      "claim_type": "exclusive_write",
00:22:17.326      "zoned": false,
00:22:17.326      "supported_io_types": {
00:22:17.326        "read": true,
00:22:17.326        "write": true,
00:22:17.326        "unmap": true,
00:22:17.326        "write_zeroes": true,
00:22:17.326        "flush": true,
00:22:17.326        "reset": true,
00:22:17.326        "compare": false,
00:22:17.326        "compare_and_write": false,
00:22:17.326        "abort": true,
00:22:17.326        "nvme_admin": false,
00:22:17.326        "nvme_io": false
00:22:17.326      },
00:22:17.326      "memory_domains": [
00:22:17.326        {
00:22:17.326          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:17.326          "dma_device_type": 2
00:22:17.326        }
00:22:17.326      ],
00:22:17.326      "driver_specific": {}
00:22:17.326    }
00:22:17.326  ]
00:22:17.584   23:55:48	-- common/autotest_common.sh@905 -- # return 0
00:22:17.584   23:55:48	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:17.584   23:55:48	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:22:17.584   23:55:48	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:17.584   23:55:48	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:17.584   23:55:48	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:17.584   23:55:48	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:17.584   23:55:48	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:17.584   23:55:48	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:17.584   23:55:48	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:17.584   23:55:48	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:17.584    23:55:48	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:17.584    23:55:48	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:17.584   23:55:48	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:17.585    "name": "Existed_Raid",
00:22:17.585    "uuid": "37488622-ad62-40e3-9f5e-5b1f9e5f90ea",
00:22:17.585    "strip_size_kb": 64,
00:22:17.585    "state": "configuring",
00:22:17.585    "raid_level": "raid5f",
00:22:17.585    "superblock": true,
00:22:17.585    "num_base_bdevs": 3,
00:22:17.585    "num_base_bdevs_discovered": 1,
00:22:17.585    "num_base_bdevs_operational": 3,
00:22:17.585    "base_bdevs_list": [
00:22:17.585      {
00:22:17.585        "name": "BaseBdev1",
00:22:17.585        "uuid": "3b7b6ea8-203a-480e-a1b3-3c4c783482f9",
00:22:17.585        "is_configured": true,
00:22:17.585        "data_offset": 2048,
00:22:17.585        "data_size": 63488
00:22:17.585      },
00:22:17.585      {
00:22:17.585        "name": "BaseBdev2",
00:22:17.585        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:17.585        "is_configured": false,
00:22:17.585        "data_offset": 0,
00:22:17.585        "data_size": 0
00:22:17.585      },
00:22:17.585      {
00:22:17.585        "name": "BaseBdev3",
00:22:17.585        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:17.585        "is_configured": false,
00:22:17.585        "data_offset": 0,
00:22:17.585        "data_size": 0
00:22:17.585      }
00:22:17.585    ]
00:22:17.585  }'
00:22:17.585   23:55:48	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:17.585   23:55:48	-- common/autotest_common.sh@10 -- # set +x
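The superblock is visible in the dump above: BaseBdev1's data_offset is now 2048 blocks (2048 * 512 = 1 MiB reserved at the front of the bdev) and its data_size shrinks accordingly, while the unconfigured slots still show 0:

    # with a superblock, usable data = num_blocks - data_offset
    echo $(( 65536 - 2048 ))   # 63488, matching data_size above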
00:22:18.151   23:55:48	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:22:18.409  [2024-12-13 23:55:49.065149] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:22:18.409  [2024-12-13 23:55:49.065359] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:22:18.409   23:55:49	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:22:18.409   23:55:49	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:22:18.667   23:55:49	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:22:18.926  BaseBdev1
00:22:19.185   23:55:49	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:22:19.185   23:55:49	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:22:19.185   23:55:49	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:22:19.185   23:55:49	-- common/autotest_common.sh@899 -- # local i
00:22:19.185   23:55:49	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:22:19.185   23:55:49	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:22:19.185   23:55:49	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:22:19.185   23:55:49	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:22:19.444  [
00:22:19.444    {
00:22:19.444      "name": "BaseBdev1",
00:22:19.444      "aliases": [
00:22:19.444        "2b377966-b3a2-4340-8067-656f2712f209"
00:22:19.444      ],
00:22:19.444      "product_name": "Malloc disk",
00:22:19.444      "block_size": 512,
00:22:19.444      "num_blocks": 65536,
00:22:19.444      "uuid": "2b377966-b3a2-4340-8067-656f2712f209",
00:22:19.444      "assigned_rate_limits": {
00:22:19.444        "rw_ios_per_sec": 0,
00:22:19.444        "rw_mbytes_per_sec": 0,
00:22:19.444        "r_mbytes_per_sec": 0,
00:22:19.444        "w_mbytes_per_sec": 0
00:22:19.444      },
00:22:19.444      "claimed": false,
00:22:19.444      "zoned": false,
00:22:19.444      "supported_io_types": {
00:22:19.444        "read": true,
00:22:19.444        "write": true,
00:22:19.444        "unmap": true,
00:22:19.444        "write_zeroes": true,
00:22:19.444        "flush": true,
00:22:19.444        "reset": true,
00:22:19.444        "compare": false,
00:22:19.444        "compare_and_write": false,
00:22:19.444        "abort": true,
00:22:19.444        "nvme_admin": false,
00:22:19.444        "nvme_io": false
00:22:19.444      },
00:22:19.444      "memory_domains": [
00:22:19.444        {
00:22:19.444          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:19.444          "dma_device_type": 2
00:22:19.444        }
00:22:19.444      ],
00:22:19.444      "driver_specific": {}
00:22:19.444    }
00:22:19.444  ]
00:22:19.444   23:55:50	-- common/autotest_common.sh@905 -- # return 0
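Unlike the non-superblock run, the @244 branch deleted and recreated BaseBdev1 before retrying the create, and the fresh dump above shows the difference: claimed is now false and the exclusive_write claim_type is gone, whereas every earlier BaseBdev1 dump was claimed by the configuring raid. Presumably this hands the retried create an unclaimed base bdev with a new UUID to write its superblock to; a probe for that state, hypothetical but using only fields from the dump:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # the recreated bdev starts unclaimed; earlier dumps showed exclusive_write
    "$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev1 | jq '.[0] | {claimed, uuid}'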
00:22:19.444   23:55:50	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:22:19.702  [2024-12-13 23:55:50.276724] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:19.702  [2024-12-13 23:55:50.278646] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:22:19.702  [2024-12-13 23:55:50.278858] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:22:19.702  [2024-12-13 23:55:50.278969] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:22:19.702  [2024-12-13 23:55:50.279102] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:22:19.702   23:55:50	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:22:19.702   23:55:50	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:22:19.702   23:55:50	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:19.702   23:55:50	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:22:19.702   23:55:50	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:19.702   23:55:50	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:19.702   23:55:50	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:19.702   23:55:50	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:19.702   23:55:50	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:19.702   23:55:50	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:19.702   23:55:50	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:19.702   23:55:50	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:19.702    23:55:50	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:19.702    23:55:50	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:19.961   23:55:50	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:19.961    "name": "Existed_Raid",
00:22:19.961    "uuid": "aa3d743b-8b45-4f8a-b828-5c246e8bbda3",
00:22:19.961    "strip_size_kb": 64,
00:22:19.961    "state": "configuring",
00:22:19.961    "raid_level": "raid5f",
00:22:19.961    "superblock": true,
00:22:19.961    "num_base_bdevs": 3,
00:22:19.961    "num_base_bdevs_discovered": 1,
00:22:19.961    "num_base_bdevs_operational": 3,
00:22:19.961    "base_bdevs_list": [
00:22:19.961      {
00:22:19.961        "name": "BaseBdev1",
00:22:19.961        "uuid": "2b377966-b3a2-4340-8067-656f2712f209",
00:22:19.961        "is_configured": true,
00:22:19.961        "data_offset": 2048,
00:22:19.961        "data_size": 63488
00:22:19.961      },
00:22:19.961      {
00:22:19.961        "name": "BaseBdev2",
00:22:19.961        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:19.961        "is_configured": false,
00:22:19.961        "data_offset": 0,
00:22:19.961        "data_size": 0
00:22:19.961      },
00:22:19.961      {
00:22:19.961        "name": "BaseBdev3",
00:22:19.961        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:19.961        "is_configured": false,
00:22:19.961        "data_offset": 0,
00:22:19.961        "data_size": 0
00:22:19.961      }
00:22:19.961    ]
00:22:19.961  }'
00:22:19.961   23:55:50	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:19.961   23:55:50	-- common/autotest_common.sh@10 -- # set +x
00:22:20.528   23:55:51	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:22:20.787  [2024-12-13 23:55:51.436646] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:22:20.787  BaseBdev2
00:22:20.787   23:55:51	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:22:20.787   23:55:51	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:22:20.787   23:55:51	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:22:20.787   23:55:51	-- common/autotest_common.sh@899 -- # local i
00:22:20.787   23:55:51	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:22:20.787   23:55:51	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:22:20.787   23:55:51	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:22:21.046   23:55:51	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:22:21.305  [
00:22:21.305    {
00:22:21.305      "name": "BaseBdev2",
00:22:21.305      "aliases": [
00:22:21.305        "5b4a2861-50f0-46cc-b3a3-919a3c0a4218"
00:22:21.305      ],
00:22:21.305      "product_name": "Malloc disk",
00:22:21.305      "block_size": 512,
00:22:21.305      "num_blocks": 65536,
00:22:21.305      "uuid": "5b4a2861-50f0-46cc-b3a3-919a3c0a4218",
00:22:21.305      "assigned_rate_limits": {
00:22:21.305        "rw_ios_per_sec": 0,
00:22:21.305        "rw_mbytes_per_sec": 0,
00:22:21.305        "r_mbytes_per_sec": 0,
00:22:21.305        "w_mbytes_per_sec": 0
00:22:21.305      },
00:22:21.305      "claimed": true,
00:22:21.305      "claim_type": "exclusive_write",
00:22:21.305      "zoned": false,
00:22:21.305      "supported_io_types": {
00:22:21.305        "read": true,
00:22:21.305        "write": true,
00:22:21.305        "unmap": true,
00:22:21.305        "write_zeroes": true,
00:22:21.305        "flush": true,
00:22:21.305        "reset": true,
00:22:21.305        "compare": false,
00:22:21.305        "compare_and_write": false,
00:22:21.305        "abort": true,
00:22:21.305        "nvme_admin": false,
00:22:21.305        "nvme_io": false
00:22:21.305      },
00:22:21.305      "memory_domains": [
00:22:21.305        {
00:22:21.305          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:21.305          "dma_device_type": 2
00:22:21.305        }
00:22:21.305      ],
00:22:21.305      "driver_specific": {}
00:22:21.305    }
00:22:21.305  ]
00:22:21.305   23:55:51	-- common/autotest_common.sh@905 -- # return 0
00:22:21.305   23:55:51	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:22:21.305   23:55:51	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:22:21.305   23:55:51	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:21.305   23:55:51	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:22:21.305   23:55:51	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:21.305   23:55:51	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:21.305   23:55:51	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:21.305   23:55:51	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:21.305   23:55:51	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:21.305   23:55:51	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:21.305   23:55:51	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:21.305   23:55:51	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:21.305    23:55:51	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:21.305    23:55:51	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:21.564   23:55:52	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:21.564    "name": "Existed_Raid",
00:22:21.564    "uuid": "aa3d743b-8b45-4f8a-b828-5c246e8bbda3",
00:22:21.564    "strip_size_kb": 64,
00:22:21.564    "state": "configuring",
00:22:21.564    "raid_level": "raid5f",
00:22:21.564    "superblock": true,
00:22:21.564    "num_base_bdevs": 3,
00:22:21.564    "num_base_bdevs_discovered": 2,
00:22:21.564    "num_base_bdevs_operational": 3,
00:22:21.564    "base_bdevs_list": [
00:22:21.564      {
00:22:21.564        "name": "BaseBdev1",
00:22:21.564        "uuid": "2b377966-b3a2-4340-8067-656f2712f209",
00:22:21.564        "is_configured": true,
00:22:21.564        "data_offset": 2048,
00:22:21.564        "data_size": 63488
00:22:21.564      },
00:22:21.564      {
00:22:21.564        "name": "BaseBdev2",
00:22:21.564        "uuid": "5b4a2861-50f0-46cc-b3a3-919a3c0a4218",
00:22:21.564        "is_configured": true,
00:22:21.564        "data_offset": 2048,
00:22:21.564        "data_size": 63488
00:22:21.564      },
00:22:21.564      {
00:22:21.564        "name": "BaseBdev3",
00:22:21.564        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:21.564        "is_configured": false,
00:22:21.564        "data_offset": 0,
00:22:21.564        "data_size": 0
00:22:21.564      }
00:22:21.564    ]
00:22:21.564  }'
00:22:21.564   23:55:52	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:21.564   23:55:52	-- common/autotest_common.sh@10 -- # set +x
00:22:22.132   23:55:52	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:22:22.391  [2024-12-13 23:55:52.948112] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:22:22.391  [2024-12-13 23:55:52.948540] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580
00:22:22.391  [2024-12-13 23:55:52.948715] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:22:22.391  [2024-12-13 23:55:52.948878] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:22:22.391  BaseBdev3
00:22:22.391  [2024-12-13 23:55:52.953727] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580
00:22:22.391  [2024-12-13 23:55:52.953890] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580
00:22:22.391  [2024-12-13 23:55:52.954209] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:22.391   23:55:52	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:22:22.391   23:55:52	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:22:22.391   23:55:52	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:22:22.391   23:55:52	-- common/autotest_common.sh@899 -- # local i
00:22:22.391   23:55:52	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:22:22.391   23:55:52	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:22:22.391   23:55:52	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:22:22.650   23:55:53	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:22:22.909  [
00:22:22.909    {
00:22:22.909      "name": "BaseBdev3",
00:22:22.909      "aliases": [
00:22:22.909        "bdbe5d79-645d-48af-86bc-c5d6913c6264"
00:22:22.909      ],
00:22:22.909      "product_name": "Malloc disk",
00:22:22.909      "block_size": 512,
00:22:22.909      "num_blocks": 65536,
00:22:22.909      "uuid": "bdbe5d79-645d-48af-86bc-c5d6913c6264",
00:22:22.909      "assigned_rate_limits": {
00:22:22.909        "rw_ios_per_sec": 0,
00:22:22.909        "rw_mbytes_per_sec": 0,
00:22:22.909        "r_mbytes_per_sec": 0,
00:22:22.909        "w_mbytes_per_sec": 0
00:22:22.909      },
00:22:22.909      "claimed": true,
00:22:22.909      "claim_type": "exclusive_write",
00:22:22.909      "zoned": false,
00:22:22.909      "supported_io_types": {
00:22:22.909        "read": true,
00:22:22.909        "write": true,
00:22:22.909        "unmap": true,
00:22:22.909        "write_zeroes": true,
00:22:22.909        "flush": true,
00:22:22.909        "reset": true,
00:22:22.909        "compare": false,
00:22:22.909        "compare_and_write": false,
00:22:22.909        "abort": true,
00:22:22.909        "nvme_admin": false,
00:22:22.909        "nvme_io": false
00:22:22.909      },
00:22:22.909      "memory_domains": [
00:22:22.909        {
00:22:22.909          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:22.909          "dma_device_type": 2
00:22:22.909        }
00:22:22.909      ],
00:22:22.909      "driver_specific": {}
00:22:22.909    }
00:22:22.909  ]
00:22:22.909   23:55:53	-- common/autotest_common.sh@905 -- # return 0
00:22:22.909   23:55:53	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:22:22.909   23:55:53	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:22:22.909   23:55:53	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:22:22.909   23:55:53	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:22:22.909   23:55:53	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:22.909   23:55:53	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:22.909   23:55:53	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:22.909   23:55:53	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:22.909   23:55:53	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:22.909   23:55:53	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:22.909   23:55:53	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:22.909   23:55:53	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:22.909    23:55:53	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:22.909    23:55:53	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:22.909   23:55:53	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:22.909    "name": "Existed_Raid",
00:22:22.909    "uuid": "aa3d743b-8b45-4f8a-b828-5c246e8bbda3",
00:22:22.909    "strip_size_kb": 64,
00:22:22.909    "state": "online",
00:22:22.909    "raid_level": "raid5f",
00:22:22.909    "superblock": true,
00:22:22.909    "num_base_bdevs": 3,
00:22:22.909    "num_base_bdevs_discovered": 3,
00:22:22.909    "num_base_bdevs_operational": 3,
00:22:22.909    "base_bdevs_list": [
00:22:22.909      {
00:22:22.909        "name": "BaseBdev1",
00:22:22.909        "uuid": "2b377966-b3a2-4340-8067-656f2712f209",
00:22:22.909        "is_configured": true,
00:22:22.909        "data_offset": 2048,
00:22:22.909        "data_size": 63488
00:22:22.909      },
00:22:22.909      {
00:22:22.909        "name": "BaseBdev2",
00:22:22.909        "uuid": "5b4a2861-50f0-46cc-b3a3-919a3c0a4218",
00:22:22.909        "is_configured": true,
00:22:22.909        "data_offset": 2048,
00:22:22.909        "data_size": 63488
00:22:22.909      },
00:22:22.909      {
00:22:22.909        "name": "BaseBdev3",
00:22:22.909        "uuid": "bdbe5d79-645d-48af-86bc-c5d6913c6264",
00:22:22.909        "is_configured": true,
00:22:22.909        "data_offset": 2048,
00:22:22.909        "data_size": 63488
00:22:22.909      }
00:22:22.909    ]
00:22:22.909  }'
00:22:22.909   23:55:53	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:22.909   23:55:53	-- common/autotest_common.sh@10 -- # set +x
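Each verify_raid_bdev_state call fetches the raid bdev JSON and filters it with the jq expression shown above; the actual field comparisons run under xtrace_disable, so they never appear in this log. A plausible sketch of those hidden checks, assuming plain jq field extraction rather than the verbatim helper:

    tmp=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
          | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r '.state'         <<< "$tmp") == online ]]
    [[ $(jq -r '.raid_level'    <<< "$tmp") == raid5f ]]
    [[ $(jq -r '.strip_size_kb' <<< "$tmp") == 64 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$tmp") == 3 ]]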
00:22:23.845   23:55:54	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:22:23.845  [2024-12-13 23:55:54.501656] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:22:24.104   23:55:54	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:22:24.105   23:55:54	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f
00:22:24.105   23:55:54	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:22:24.105   23:55:54	-- bdev/bdev_raid.sh@196 -- # return 0
00:22:24.105   23:55:54	-- bdev/bdev_raid.sh@267 -- # expected_state=online
00:22:24.105   23:55:54	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2
00:22:24.105   23:55:54	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:22:24.105   23:55:54	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:24.105   23:55:54	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:24.105   23:55:54	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:24.105   23:55:54	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:24.105   23:55:54	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:24.105   23:55:54	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:24.105   23:55:54	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:24.105   23:55:54	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:24.105    23:55:54	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:24.105    23:55:54	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:24.105   23:55:54	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:24.105    "name": "Existed_Raid",
00:22:24.105    "uuid": "aa3d743b-8b45-4f8a-b828-5c246e8bbda3",
00:22:24.105    "strip_size_kb": 64,
00:22:24.105    "state": "online",
00:22:24.105    "raid_level": "raid5f",
00:22:24.105    "superblock": true,
00:22:24.105    "num_base_bdevs": 3,
00:22:24.105    "num_base_bdevs_discovered": 2,
00:22:24.105    "num_base_bdevs_operational": 2,
00:22:24.105    "base_bdevs_list": [
00:22:24.105      {
00:22:24.105        "name": null,
00:22:24.105        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:24.105        "is_configured": false,
00:22:24.105        "data_offset": 2048,
00:22:24.105        "data_size": 63488
00:22:24.105      },
00:22:24.105      {
00:22:24.105        "name": "BaseBdev2",
00:22:24.105        "uuid": "5b4a2861-50f0-46cc-b3a3-919a3c0a4218",
00:22:24.105        "is_configured": true,
00:22:24.105        "data_offset": 2048,
00:22:24.105        "data_size": 63488
00:22:24.105      },
00:22:24.105      {
00:22:24.105        "name": "BaseBdev3",
00:22:24.105        "uuid": "bdbe5d79-645d-48af-86bc-c5d6913c6264",
00:22:24.105        "is_configured": true,
00:22:24.105        "data_offset": 2048,
00:22:24.105        "data_size": 63488
00:22:24.105      }
00:22:24.105    ]
00:22:24.105  }'
00:22:24.363   23:55:54	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:24.363   23:55:54	-- common/autotest_common.sh@10 -- # set +x
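Deleting BaseBdev1 out from under the online array is the hot-removal check: has_redundancy matched raid5f in its case statement (@195-196) and returned 0, so the expected state stays online with 2 of 3 base bdevs operational. A minimal sketch of that helper, assuming raid1 and raid5f are the redundant levels (the non-redundant branch is inferred, not shown in this log):

    has_redundancy() {
        case $1 in
            raid1 | raid5f) return 0 ;;  # survives losing a base bdev
            *) return 1 ;;               # e.g. raid0/concat: any loss takes the array down
        esac
    }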
00:22:24.930   23:55:55	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:22:24.930   23:55:55	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:22:24.930    23:55:55	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:24.930    23:55:55	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:22:25.188   23:55:55	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:22:25.188   23:55:55	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:22:25.188   23:55:55	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:22:25.447  [2024-12-13 23:55:55.945834] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:22:25.447  [2024-12-13 23:55:55.946021] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:25.447  [2024-12-13 23:55:55.946193] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:25.447   23:55:56	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:22:25.447   23:55:56	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:22:25.447    23:55:56	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:25.447    23:55:56	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:22:25.705   23:55:56	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:22:25.705   23:55:56	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:22:25.705   23:55:56	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:22:25.705  [2024-12-13 23:55:56.421335] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:22:25.705  [2024-12-13 23:55:56.421578] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline
00:22:25.963   23:55:56	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:22:25.963   23:55:56	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:22:25.963    23:55:56	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:25.963    23:55:56	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:22:26.222   23:55:56	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:22:26.222   23:55:56	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:22:26.222   23:55:56	-- bdev/bdev_raid.sh@287 -- # killprocess 127161
00:22:26.222   23:55:56	-- common/autotest_common.sh@936 -- # '[' -z 127161 ']'
00:22:26.222   23:55:56	-- common/autotest_common.sh@940 -- # kill -0 127161
00:22:26.222    23:55:56	-- common/autotest_common.sh@941 -- # uname
00:22:26.222   23:55:56	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:26.222    23:55:56	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127161
00:22:26.222  killing process with pid 127161
00:22:26.222   23:55:56	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:26.222   23:55:56	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:26.222   23:55:56	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 127161'
00:22:26.222   23:55:56	-- common/autotest_common.sh@955 -- # kill 127161
00:22:26.222   23:55:56	-- common/autotest_common.sh@960 -- # wait 127161
00:22:26.222  [2024-12-13 23:55:56.719752] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:22:26.222  [2024-12-13 23:55:56.719869] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:22:27.159  ************************************
00:22:27.159  END TEST raid5f_state_function_test_sb
00:22:27.159  ************************************
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@289 -- # return 0
00:22:27.159  
00:22:27.159  real	0m12.804s
00:22:27.159  user	0m22.667s
00:22:27.159  sys	0m1.511s
00:22:27.159   23:55:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:22:27.159   23:55:57	-- common/autotest_common.sh@10 -- # set +x
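The START TEST / END TEST banners and the real/user/sys block come from the run_test wrapper, which runs each named test case under `time`. A simplified reconstruction (the real wrapper also hooks into the harness's timing and xtrace plumbing):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # source of the real/user/sys lines in this log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }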
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3
00:22:27.159   23:55:57	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:22:27.159   23:55:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:27.159   23:55:57	-- common/autotest_common.sh@10 -- # set +x
00:22:27.159  ************************************
00:22:27.159  START TEST raid5f_superblock_test
00:22:27.159  ************************************
00:22:27.159   23:55:57	-- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 3
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']'
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@350 -- # strip_size=64
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@357 -- # raid_pid=127547
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:22:27.159   23:55:57	-- bdev/bdev_raid.sh@358 -- # waitforlisten 127547 /var/tmp/spdk-raid.sock
00:22:27.159   23:55:57	-- common/autotest_common.sh@829 -- # '[' -z 127547 ']'
00:22:27.159   23:55:57	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:22:27.159   23:55:57	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:27.159   23:55:57	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:22:27.159  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:22:27.159   23:55:57	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:27.159   23:55:57	-- common/autotest_common.sh@10 -- # set +x
00:22:27.159  [2024-12-13 23:55:57.787653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:27.159  [2024-12-13 23:55:57.788113] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127547 ]
00:22:27.418  [2024-12-13 23:55:57.961756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:27.676  [2024-12-13 23:55:58.175439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:27.677  [2024-12-13 23:55:58.359669] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:28.244   23:55:58	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:28.244   23:55:58	-- common/autotest_common.sh@862 -- # return 0
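The startup sequence above launches a bare bdev_svc application on a dedicated RPC socket with bdev_raid debug logging, then blocks until that socket answers. The pattern, condensed from the xtrace at @356-358 (waitforlisten's retry loop is simplified away):

    # Stub SPDK app that hosts the raid bdevs under test
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # Block until the app is up and its RPC socket accepts connections
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock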
00:22:28.244   23:55:58	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:22:28.244   23:55:58	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:22:28.244   23:55:58	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:22:28.244   23:55:58	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:22:28.244   23:55:58	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:22:28.244   23:55:58	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:22:28.244   23:55:58	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:22:28.244   23:55:58	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:22:28.244   23:55:58	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:22:28.244  malloc1
00:22:28.503   23:55:58	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:22:28.503  [2024-12-13 23:55:59.154996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:22:28.503  [2024-12-13 23:55:59.155216] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:28.503  [2024-12-13 23:55:59.155292] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:22:28.503  [2024-12-13 23:55:59.155668] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:28.503  [2024-12-13 23:55:59.158061] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:28.503  [2024-12-13 23:55:59.158225] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:22:28.503  pt1
00:22:28.503   23:55:59	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:22:28.503   23:55:59	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:22:28.503   23:55:59	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:22:28.503   23:55:59	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:22:28.503   23:55:59	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:22:28.503   23:55:59	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:22:28.503   23:55:59	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:22:28.503   23:55:59	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:22:28.503   23:55:59	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:22:28.762  malloc2
00:22:28.762   23:55:59	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:29.021  [2024-12-13 23:55:59.658026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:29.021  [2024-12-13 23:55:59.658215] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:29.021  [2024-12-13 23:55:59.658293] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:22:29.021  [2024-12-13 23:55:59.658447] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:29.021  [2024-12-13 23:55:59.660673] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:29.021  [2024-12-13 23:55:59.660845] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:29.021  pt2
00:22:29.021   23:55:59	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:22:29.021   23:55:59	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:22:29.021   23:55:59	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:22:29.021   23:55:59	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:22:29.021   23:55:59	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:22:29.021   23:55:59	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:22:29.021   23:55:59	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:22:29.021   23:55:59	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:22:29.021   23:55:59	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:22:29.280  malloc3
00:22:29.280   23:55:59	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:22:29.539  [2024-12-13 23:56:00.062604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:22:29.539  [2024-12-13 23:56:00.062782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:29.539  [2024-12-13 23:56:00.062865] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:22:29.539  [2024-12-13 23:56:00.063002] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:29.539  [2024-12-13 23:56:00.065309] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:29.539  [2024-12-13 23:56:00.065473] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:22:29.539  pt3
00:22:29.539   23:56:00	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:22:29.539   23:56:00	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
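The pt1/pt2/pt3 sections above are three passes of one setup loop: each creates a 32 MiB, 512-byte-block malloc bdev (hence 65536 blocks) and wraps it in a passthru bdev with a fixed UUID so the raid superblock contents stay deterministic across runs. The loop, condensed from the xtrace at @361-371:

    for ((i = 1; i <= num_base_bdevs; i++)); do
        rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc$i
        rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create \
            -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done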
00:22:29.539   23:56:00	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s
00:22:29.539  [2024-12-13 23:56:00.254674] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:22:29.539  [2024-12-13 23:56:00.256690] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:29.539  [2024-12-13 23:56:00.256867] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:22:29.539  [2024-12-13 23:56:00.257110] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780
00:22:29.539  [2024-12-13 23:56:00.257220] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:22:29.539  [2024-12-13 23:56:00.257344] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930
00:22:29.539  [2024-12-13 23:56:00.261599] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780
00:22:29.539  [2024-12-13 23:56:00.261726] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780
00:22:29.539  [2024-12-13 23:56:00.262008] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:29.797   23:56:00	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:22:29.797   23:56:00	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:29.797   23:56:00	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:29.797   23:56:00	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:29.797   23:56:00	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:29.797   23:56:00	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:29.798   23:56:00	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:29.798   23:56:00	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:29.798   23:56:00	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:29.798   23:56:00	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:29.798    23:56:00	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:29.798    23:56:00	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:29.798   23:56:00	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:29.798    "name": "raid_bdev1",
00:22:29.798    "uuid": "fd730738-1954-4a89-81bd-fa7038d7f33d",
00:22:29.798    "strip_size_kb": 64,
00:22:29.798    "state": "online",
00:22:29.798    "raid_level": "raid5f",
00:22:29.798    "superblock": true,
00:22:29.798    "num_base_bdevs": 3,
00:22:29.798    "num_base_bdevs_discovered": 3,
00:22:29.798    "num_base_bdevs_operational": 3,
00:22:29.798    "base_bdevs_list": [
00:22:29.798      {
00:22:29.798        "name": "pt1",
00:22:29.798        "uuid": "bc479e2b-0991-50cd-960b-5c64d879d7b3",
00:22:29.798        "is_configured": true,
00:22:29.798        "data_offset": 2048,
00:22:29.798        "data_size": 63488
00:22:29.798      },
00:22:29.798      {
00:22:29.798        "name": "pt2",
00:22:29.798        "uuid": "a527b5d1-cce8-548d-b5c9-4d5884c93d7e",
00:22:29.798        "is_configured": true,
00:22:29.798        "data_offset": 2048,
00:22:29.798        "data_size": 63488
00:22:29.798      },
00:22:29.798      {
00:22:29.798        "name": "pt3",
00:22:29.798        "uuid": "59d1363a-225d-55d6-a606-5d4cdc4236ac",
00:22:29.798        "is_configured": true,
00:22:29.798        "data_offset": 2048,
00:22:29.798        "data_size": 63488
00:22:29.798      }
00:22:29.798    ]
00:22:29.798  }'
00:22:29.798   23:56:00	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:29.798   23:56:00	-- common/autotest_common.sh@10 -- # set +x
00:22:30.733    23:56:01	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:22:30.733    23:56:01	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:22:30.733  [2024-12-13 23:56:01.263234] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:30.733   23:56:01	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=fd730738-1954-4a89-81bd-fa7038d7f33d
00:22:30.733   23:56:01	-- bdev/bdev_raid.sh@380 -- # '[' -z fd730738-1954-4a89-81bd-fa7038d7f33d ']'
00:22:30.733   23:56:01	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:22:30.733  [2024-12-13 23:56:01.455142] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:30.733  [2024-12-13 23:56:01.455276] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:30.733  [2024-12-13 23:56:01.455458] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:30.733  [2024-12-13 23:56:01.455625] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:30.733  [2024-12-13 23:56:01.455736] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline
00:22:30.992    23:56:01	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:30.992    23:56:01	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:22:30.992   23:56:01	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:22:30.992   23:56:01	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:22:30.992   23:56:01	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:22:30.992   23:56:01	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:22:31.251   23:56:01	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:22:31.251   23:56:01	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:22:31.510   23:56:02	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:22:31.510   23:56:02	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:22:31.510    23:56:02	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:22:31.510    23:56:02	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:22:31.768   23:56:02	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:22:31.768   23:56:02	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:22:31.768   23:56:02	-- common/autotest_common.sh@650 -- # local es=0
00:22:31.768   23:56:02	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:22:31.768   23:56:02	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:31.768   23:56:02	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:31.768    23:56:02	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:31.768   23:56:02	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:31.768    23:56:02	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:31.768   23:56:02	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:31.768   23:56:02	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:31.768   23:56:02	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:22:31.768   23:56:02	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:22:32.027  [2024-12-13 23:56:02.591320] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:22:32.027  [2024-12-13 23:56:02.593363] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:22:32.027  [2024-12-13 23:56:02.593520] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:22:32.027  [2024-12-13 23:56:02.593690] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:22:32.027  [2024-12-13 23:56:02.593841] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:22:32.027  [2024-12-13 23:56:02.593973] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:22:32.027  [2024-12-13 23:56:02.594120] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:32.027  [2024-12-13 23:56:02.594218] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring
00:22:32.027  request:
00:22:32.027  {
00:22:32.027    "name": "raid_bdev1",
00:22:32.027    "raid_level": "raid5f",
00:22:32.027    "base_bdevs": [
00:22:32.027      "malloc1",
00:22:32.027      "malloc2",
00:22:32.027      "malloc3"
00:22:32.027    ],
00:22:32.027    "superblock": false,
00:22:32.027    "strip_size_kb": 64,
00:22:32.027    "method": "bdev_raid_create",
00:22:32.027    "req_id": 1
00:22:32.027  }
00:22:32.027  Got JSON-RPC error response
00:22:32.027  response:
00:22:32.027  {
00:22:32.027    "code": -17,
00:22:32.027    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:22:32.027  }
00:22:32.027   23:56:02	-- common/autotest_common.sh@653 -- # es=1
00:22:32.027   23:56:02	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:32.027   23:56:02	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:32.027   23:56:02	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
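This block is a negative test: re-creating raid_bdev1 from malloc bdevs that still carry a raid superblock must fail with -17 "File exists", and the NOT wrapper (xtrace at @650-677) inverts the exit status so the expected failure counts as a pass. Reduced to its essence (the real helper also validates the executable via valid_exec_arg and special-cases exit codes above 128):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))    # NOT succeeds only if the wrapped command failed
    }

    # Expected to fail: the malloc bdevs already hold a raid superblock
    NOT rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1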
00:22:32.027    23:56:02	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:32.027    23:56:02	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:22:32.286   23:56:02	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:22:32.286   23:56:02	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:22:32.286   23:56:02	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:22:32.545  [2024-12-13 23:56:03.079372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:22:32.545  [2024-12-13 23:56:03.079558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:32.545  [2024-12-13 23:56:03.079631] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:22:32.545  [2024-12-13 23:56:03.079750] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:32.545  [2024-12-13 23:56:03.082016] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:32.545  [2024-12-13 23:56:03.082182] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:22:32.545  [2024-12-13 23:56:03.082378] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:22:32.545  [2024-12-13 23:56:03.082529] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:22:32.545  pt1
00:22:32.545   23:56:03	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:22:32.545   23:56:03	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:32.545   23:56:03	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:32.545   23:56:03	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:32.545   23:56:03	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:32.545   23:56:03	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:32.545   23:56:03	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:32.545   23:56:03	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:32.545   23:56:03	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:32.545   23:56:03	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:32.545    23:56:03	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:32.545    23:56:03	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:32.804   23:56:03	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:32.804    "name": "raid_bdev1",
00:22:32.804    "uuid": "fd730738-1954-4a89-81bd-fa7038d7f33d",
00:22:32.804    "strip_size_kb": 64,
00:22:32.804    "state": "configuring",
00:22:32.804    "raid_level": "raid5f",
00:22:32.804    "superblock": true,
00:22:32.804    "num_base_bdevs": 3,
00:22:32.804    "num_base_bdevs_discovered": 1,
00:22:32.804    "num_base_bdevs_operational": 3,
00:22:32.804    "base_bdevs_list": [
00:22:32.804      {
00:22:32.804        "name": "pt1",
00:22:32.804        "uuid": "bc479e2b-0991-50cd-960b-5c64d879d7b3",
00:22:32.804        "is_configured": true,
00:22:32.804        "data_offset": 2048,
00:22:32.804        "data_size": 63488
00:22:32.804      },
00:22:32.804      {
00:22:32.804        "name": null,
00:22:32.804        "uuid": "a527b5d1-cce8-548d-b5c9-4d5884c93d7e",
00:22:32.804        "is_configured": false,
00:22:32.804        "data_offset": 2048,
00:22:32.804        "data_size": 63488
00:22:32.804      },
00:22:32.804      {
00:22:32.804        "name": null,
00:22:32.804        "uuid": "59d1363a-225d-55d6-a606-5d4cdc4236ac",
00:22:32.804        "is_configured": false,
00:22:32.804        "data_offset": 2048,
00:22:32.804        "data_size": 63488
00:22:32.804      }
00:22:32.804    ]
00:22:32.804  }'
00:22:32.804   23:56:03	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:32.804   23:56:03	-- common/autotest_common.sh@10 -- # set +x
00:22:33.371   23:56:03	-- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']'
00:22:33.371   23:56:03	-- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:33.371  [2024-12-13 23:56:04.083579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:33.371  [2024-12-13 23:56:04.083779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:33.371  [2024-12-13 23:56:04.083865] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:22:33.371  [2024-12-13 23:56:04.084033] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:33.371  [2024-12-13 23:56:04.084451] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:33.371  [2024-12-13 23:56:04.084601] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:33.371  [2024-12-13 23:56:04.084800] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:22:33.371  [2024-12-13 23:56:04.084921] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:33.371  pt2
00:22:33.371   23:56:04	-- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:22:33.630  [2024-12-13 23:56:04.323642] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:22:33.630   23:56:04	-- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:22:33.630   23:56:04	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:33.630   23:56:04	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:33.630   23:56:04	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:33.630   23:56:04	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:33.630   23:56:04	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:33.630   23:56:04	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:33.630   23:56:04	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:33.630   23:56:04	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:33.630   23:56:04	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:33.630    23:56:04	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:33.630    23:56:04	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:33.889   23:56:04	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:33.889    "name": "raid_bdev1",
00:22:33.889    "uuid": "fd730738-1954-4a89-81bd-fa7038d7f33d",
00:22:33.889    "strip_size_kb": 64,
00:22:33.889    "state": "configuring",
00:22:33.889    "raid_level": "raid5f",
00:22:33.889    "superblock": true,
00:22:33.889    "num_base_bdevs": 3,
00:22:33.889    "num_base_bdevs_discovered": 1,
00:22:33.889    "num_base_bdevs_operational": 3,
00:22:33.889    "base_bdevs_list": [
00:22:33.889      {
00:22:33.889        "name": "pt1",
00:22:33.889        "uuid": "bc479e2b-0991-50cd-960b-5c64d879d7b3",
00:22:33.889        "is_configured": true,
00:22:33.889        "data_offset": 2048,
00:22:33.889        "data_size": 63488
00:22:33.889      },
00:22:33.889      {
00:22:33.889        "name": null,
00:22:33.889        "uuid": "a527b5d1-cce8-548d-b5c9-4d5884c93d7e",
00:22:33.889        "is_configured": false,
00:22:33.889        "data_offset": 2048,
00:22:33.889        "data_size": 63488
00:22:33.889      },
00:22:33.889      {
00:22:33.889        "name": null,
00:22:33.889        "uuid": "59d1363a-225d-55d6-a606-5d4cdc4236ac",
00:22:33.889        "is_configured": false,
00:22:33.889        "data_offset": 2048,
00:22:33.889        "data_size": 63488
00:22:33.889      }
00:22:33.889    ]
00:22:33.889  }'
00:22:33.889   23:56:04	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:33.889   23:56:04	-- common/autotest_common.sh@10 -- # set +x
00:22:34.456   23:56:05	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:22:34.456   23:56:05	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:22:34.456   23:56:05	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:34.714  [2024-12-13 23:56:05.295792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:34.714  [2024-12-13 23:56:05.295988] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:34.714  [2024-12-13 23:56:05.296055] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:22:34.714  [2024-12-13 23:56:05.296169] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:34.714  [2024-12-13 23:56:05.296570] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:34.714  [2024-12-13 23:56:05.296725] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:34.714  [2024-12-13 23:56:05.296909] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:22:34.714  [2024-12-13 23:56:05.297023] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:34.714  pt2
00:22:34.714   23:56:05	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:22:34.714   23:56:05	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:22:34.714   23:56:05	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:22:34.973  [2024-12-13 23:56:05.527847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:22:34.973  [2024-12-13 23:56:05.528031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:34.973  [2024-12-13 23:56:05.528097] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:22:34.973  [2024-12-13 23:56:05.528209] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:34.973  [2024-12-13 23:56:05.528614] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:34.973  [2024-12-13 23:56:05.528773] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:22:34.973  [2024-12-13 23:56:05.528973] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:22:34.973  [2024-12-13 23:56:05.529087] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:22:34.973  [2024-12-13 23:56:05.529245] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980
00:22:34.973  [2024-12-13 23:56:05.529340] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:22:34.973  [2024-12-13 23:56:05.529481] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:22:34.973  [2024-12-13 23:56:05.533625] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980
00:22:34.973  [2024-12-13 23:56:05.533757] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980
00:22:34.973  [2024-12-13 23:56:05.534009] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:34.973  pt3
00:22:34.973   23:56:05	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:22:34.973   23:56:05	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:22:34.973   23:56:05	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:22:34.973   23:56:05	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:34.973   23:56:05	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:34.973   23:56:05	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:34.973   23:56:05	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:34.973   23:56:05	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:34.973   23:56:05	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:34.973   23:56:05	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:34.973   23:56:05	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:34.973   23:56:05	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:34.973    23:56:05	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:34.973    23:56:05	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:35.231   23:56:05	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:35.231    "name": "raid_bdev1",
00:22:35.231    "uuid": "fd730738-1954-4a89-81bd-fa7038d7f33d",
00:22:35.231    "strip_size_kb": 64,
00:22:35.231    "state": "online",
00:22:35.231    "raid_level": "raid5f",
00:22:35.231    "superblock": true,
00:22:35.232    "num_base_bdevs": 3,
00:22:35.232    "num_base_bdevs_discovered": 3,
00:22:35.232    "num_base_bdevs_operational": 3,
00:22:35.232    "base_bdevs_list": [
00:22:35.232      {
00:22:35.232        "name": "pt1",
00:22:35.232        "uuid": "bc479e2b-0991-50cd-960b-5c64d879d7b3",
00:22:35.232        "is_configured": true,
00:22:35.232        "data_offset": 2048,
00:22:35.232        "data_size": 63488
00:22:35.232      },
00:22:35.232      {
00:22:35.232        "name": "pt2",
00:22:35.232        "uuid": "a527b5d1-cce8-548d-b5c9-4d5884c93d7e",
00:22:35.232        "is_configured": true,
00:22:35.232        "data_offset": 2048,
00:22:35.232        "data_size": 63488
00:22:35.232      },
00:22:35.232      {
00:22:35.232        "name": "pt3",
00:22:35.232        "uuid": "59d1363a-225d-55d6-a606-5d4cdc4236ac",
00:22:35.232        "is_configured": true,
00:22:35.232        "data_offset": 2048,
00:22:35.232        "data_size": 63488
00:22:35.232      }
00:22:35.232    ]
00:22:35.232  }'
00:22:35.232   23:56:05	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:35.232   23:56:05	-- common/autotest_common.sh@10 -- # set +x
00:22:35.798    23:56:06	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:22:35.798    23:56:06	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:22:36.057  [2024-12-13 23:56:06.590838] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:36.057   23:56:06	-- bdev/bdev_raid.sh@430 -- # '[' fd730738-1954-4a89-81bd-fa7038d7f33d '!=' fd730738-1954-4a89-81bd-fa7038d7f33d ']'
00:22:36.057   23:56:06	-- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f
00:22:36.057   23:56:06	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:22:36.057   23:56:06	-- bdev/bdev_raid.sh@196 -- # return 0
00:22:36.057   23:56:06	-- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:22:36.316  [2024-12-13 23:56:06.834774] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:22:36.316   23:56:06	-- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:22:36.316   23:56:06	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:36.316   23:56:06	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:36.316   23:56:06	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:36.316   23:56:06	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:36.316   23:56:06	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:36.316   23:56:06	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:36.316   23:56:06	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:36.316   23:56:06	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:36.316   23:56:06	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:36.316    23:56:06	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:36.316    23:56:06	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:36.574   23:56:07	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:36.574    "name": "raid_bdev1",
00:22:36.574    "uuid": "fd730738-1954-4a89-81bd-fa7038d7f33d",
00:22:36.574    "strip_size_kb": 64,
00:22:36.574    "state": "online",
00:22:36.574    "raid_level": "raid5f",
00:22:36.574    "superblock": true,
00:22:36.574    "num_base_bdevs": 3,
00:22:36.574    "num_base_bdevs_discovered": 2,
00:22:36.574    "num_base_bdevs_operational": 2,
00:22:36.574    "base_bdevs_list": [
00:22:36.574      {
00:22:36.574        "name": null,
00:22:36.574        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:36.574        "is_configured": false,
00:22:36.574        "data_offset": 2048,
00:22:36.574        "data_size": 63488
00:22:36.574      },
00:22:36.574      {
00:22:36.574        "name": "pt2",
00:22:36.574        "uuid": "a527b5d1-cce8-548d-b5c9-4d5884c93d7e",
00:22:36.574        "is_configured": true,
00:22:36.574        "data_offset": 2048,
00:22:36.574        "data_size": 63488
00:22:36.574      },
00:22:36.574      {
00:22:36.574        "name": "pt3",
00:22:36.574        "uuid": "59d1363a-225d-55d6-a606-5d4cdc4236ac",
00:22:36.574        "is_configured": true,
00:22:36.574        "data_offset": 2048,
00:22:36.574        "data_size": 63488
00:22:36.574      }
00:22:36.575    ]
00:22:36.575  }'
00:22:36.575   23:56:07	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:36.575   23:56:07	-- common/autotest_common.sh@10 -- # set +x
00:22:37.142   23:56:07	-- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:22:37.142  [2024-12-13 23:56:07.870918] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:37.142  [2024-12-13 23:56:07.871067] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:37.142  [2024-12-13 23:56:07.871206] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:37.142  [2024-12-13 23:56:07.871296] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:37.142  [2024-12-13 23:56:07.871499] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline
00:22:37.400    23:56:07	-- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:37.400    23:56:07	-- bdev/bdev_raid.sh@443 -- # jq -r '.[]'
00:22:37.400   23:56:08	-- bdev/bdev_raid.sh@443 -- # raid_bdev=
00:22:37.400   23:56:08	-- bdev/bdev_raid.sh@444 -- # '[' -n '' ']'
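After bdev_raid_delete tears the array down, the harness confirms it is really gone by asserting that bdev_raid_get_bdevs now reports an empty list. The check, condensed from the trace at @443-444:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
    raid_bdev=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[]')
    [[ -z $raid_bdev ]]    # no raid bdevs left, teardown succeeded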
00:22:37.400   23:56:08	-- bdev/bdev_raid.sh@449 -- # (( i = 1 ))
00:22:37.400   23:56:08	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:22:37.400   23:56:08	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:22:37.659   23:56:08	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:22:37.659   23:56:08	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:22:37.659   23:56:08	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:22:37.918   23:56:08	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:22:37.918   23:56:08	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:22:37.918   23:56:08	-- bdev/bdev_raid.sh@454 -- # (( i = 1 ))
00:22:37.918   23:56:08	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:22:37.918   23:56:08	-- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:38.176  [2024-12-13 23:56:08.698692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:38.176  [2024-12-13 23:56:08.698888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:38.176  [2024-12-13 23:56:08.698959] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:22:38.176  [2024-12-13 23:56:08.699094] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:38.176  [2024-12-13 23:56:08.700951] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:38.176  [2024-12-13 23:56:08.701115] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:38.176  [2024-12-13 23:56:08.701311] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:22:38.176  [2024-12-13 23:56:08.701463] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:38.176  pt2
00:22:38.176   23:56:08	-- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2
00:22:38.176   23:56:08	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:38.176   23:56:08	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:38.176   23:56:08	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:38.176   23:56:08	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:38.176   23:56:08	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:38.176   23:56:08	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:38.176   23:56:08	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:38.176   23:56:08	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:38.176   23:56:08	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:38.176    23:56:08	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:38.176    23:56:08	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:38.176   23:56:08	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:38.176    "name": "raid_bdev1",
00:22:38.176    "uuid": "fd730738-1954-4a89-81bd-fa7038d7f33d",
00:22:38.176    "strip_size_kb": 64,
00:22:38.176    "state": "configuring",
00:22:38.176    "raid_level": "raid5f",
00:22:38.176    "superblock": true,
00:22:38.177    "num_base_bdevs": 3,
00:22:38.177    "num_base_bdevs_discovered": 1,
00:22:38.177    "num_base_bdevs_operational": 2,
00:22:38.177    "base_bdevs_list": [
00:22:38.177      {
00:22:38.177        "name": null,
00:22:38.177        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:38.177        "is_configured": false,
00:22:38.177        "data_offset": 2048,
00:22:38.177        "data_size": 63488
00:22:38.177      },
00:22:38.177      {
00:22:38.177        "name": "pt2",
00:22:38.177        "uuid": "a527b5d1-cce8-548d-b5c9-4d5884c93d7e",
00:22:38.177        "is_configured": true,
00:22:38.177        "data_offset": 2048,
00:22:38.177        "data_size": 63488
00:22:38.177      },
00:22:38.177      {
00:22:38.177        "name": null,
00:22:38.177        "uuid": "59d1363a-225d-55d6-a606-5d4cdc4236ac",
00:22:38.177        "is_configured": false,
00:22:38.177        "data_offset": 2048,
00:22:38.177        "data_size": 63488
00:22:38.177      }
00:22:38.177    ]
00:22:38.177  }'
00:22:38.177   23:56:08	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:38.177   23:56:08	-- common/autotest_common.sh@10 -- # set +x
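verify_raid_bdev_state (bdev_raid.sh@117-129 in the trace) captures the JSON for raid_bdev1 and checks it against the expected state, raid level, strip size, and operational count. The comparison itself runs with xtrace disabled (set +x), so it never appears in the log. A minimal sketch of what such a check could look like, assuming jq field extraction over the captured $raid_bdev_info; this is illustrative, not the script's exact code:

    # Hypothetical field checks over the JSON shown above
    [ "$(jq -r '.state' <<< "$raid_bdev_info")" = "$expected_state" ]
    [ "$(jq -r '.raid_level' <<< "$raid_bdev_info")" = "$raid_level" ]
    [ "$(jq -r '.strip_size_kb' <<< "$raid_bdev_info")" -eq "$strip_size" ]
    [ "$(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info")" -le \
      "$num_base_bdevs_operational" ]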
00:22:39.148   23:56:09	-- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:22:39.148   23:56:09	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:22:39.148   23:56:09	-- bdev/bdev_raid.sh@462 -- # i=2
00:22:39.148   23:56:09	-- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:22:39.148  [2024-12-13 23:56:09.774872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:22:39.148  [2024-12-13 23:56:09.775057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:39.148  [2024-12-13 23:56:09.775125] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:22:39.148  [2024-12-13 23:56:09.775285] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:39.148  [2024-12-13 23:56:09.775723] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:39.148  [2024-12-13 23:56:09.775873] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:22:39.148  [2024-12-13 23:56:09.776101] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:22:39.148  [2024-12-13 23:56:09.776230] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:22:39.148  [2024-12-13 23:56:09.776436] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80
00:22:39.148  [2024-12-13 23:56:09.776544] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:22:39.148  [2024-12-13 23:56:09.776654] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:22:39.148  [2024-12-13 23:56:09.780614] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80
00:22:39.148  [2024-12-13 23:56:09.780751] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80
00:22:39.148  [2024-12-13 23:56:09.781070] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:39.148  pt3
00:22:39.148   23:56:09	-- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:22:39.148   23:56:09	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:39.148   23:56:09	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:39.148   23:56:09	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:39.148   23:56:09	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:39.148   23:56:09	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:39.148   23:56:09	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:39.148   23:56:09	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:39.148   23:56:09	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:39.148   23:56:09	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:39.148    23:56:09	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:39.148    23:56:09	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:39.406   23:56:09	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:39.406    "name": "raid_bdev1",
00:22:39.406    "uuid": "fd730738-1954-4a89-81bd-fa7038d7f33d",
00:22:39.406    "strip_size_kb": 64,
00:22:39.406    "state": "online",
00:22:39.406    "raid_level": "raid5f",
00:22:39.406    "superblock": true,
00:22:39.406    "num_base_bdevs": 3,
00:22:39.406    "num_base_bdevs_discovered": 2,
00:22:39.406    "num_base_bdevs_operational": 2,
00:22:39.406    "base_bdevs_list": [
00:22:39.406      {
00:22:39.406        "name": null,
00:22:39.406        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:39.406        "is_configured": false,
00:22:39.406        "data_offset": 2048,
00:22:39.406        "data_size": 63488
00:22:39.406      },
00:22:39.406      {
00:22:39.406        "name": "pt2",
00:22:39.406        "uuid": "a527b5d1-cce8-548d-b5c9-4d5884c93d7e",
00:22:39.406        "is_configured": true,
00:22:39.406        "data_offset": 2048,
00:22:39.406        "data_size": 63488
00:22:39.406      },
00:22:39.406      {
00:22:39.406        "name": "pt3",
00:22:39.406        "uuid": "59d1363a-225d-55d6-a606-5d4cdc4236ac",
00:22:39.406        "is_configured": true,
00:22:39.406        "data_offset": 2048,
00:22:39.406        "data_size": 63488
00:22:39.406      }
00:22:39.406    ]
00:22:39.406  }'
00:22:39.406   23:56:09	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:39.406   23:56:09	-- common/autotest_common.sh@10 -- # set +x
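With pt3 restored, num_base_bdevs_discovered reaches 2, equal to num_base_bdevs_operational, so the raid module finishes configuration and the state flips from configuring to online; the io-device register and blockcnt DEBUG lines above mark that transition. The array is online with only 2 of 3 base bdevs because raid5f tolerates one missing member.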
00:22:39.973   23:56:10	-- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']'
00:22:39.973   23:56:10	-- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:22:40.231  [2024-12-13 23:56:10.776298] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:40.231  [2024-12-13 23:56:10.776466] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:40.231  [2024-12-13 23:56:10.776639] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:40.231  [2024-12-13 23:56:10.776838] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:40.231  [2024-12-13 23:56:10.776969] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline
00:22:40.231    23:56:10	-- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:40.231    23:56:10	-- bdev/bdev_raid.sh@471 -- # jq -r '.[]'
00:22:40.489   23:56:11	-- bdev/bdev_raid.sh@471 -- # raid_bdev=
00:22:40.489   23:56:11	-- bdev/bdev_raid.sh@472 -- # '[' -n '' ']'
00:22:40.489   23:56:11	-- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:22:40.489  [2024-12-13 23:56:11.220374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:22:40.489  [2024-12-13 23:56:11.220589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:40.489  [2024-12-13 23:56:11.220773] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:22:40.489  [2024-12-13 23:56:11.220933] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:40.747  [2024-12-13 23:56:11.223556] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:40.747  [2024-12-13 23:56:11.223739] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:22:40.747  [2024-12-13 23:56:11.224021] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:22:40.747  [2024-12-13 23:56:11.224203] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:22:40.747  pt1
00:22:40.747   23:56:11	-- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:22:40.747   23:56:11	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:40.747   23:56:11	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:40.747   23:56:11	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:40.747   23:56:11	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:40.747   23:56:11	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:40.747   23:56:11	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:40.747   23:56:11	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:40.747   23:56:11	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:40.747   23:56:11	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:40.747    23:56:11	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:40.747    23:56:11	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:41.006   23:56:11	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:41.006    "name": "raid_bdev1",
00:22:41.006    "uuid": "fd730738-1954-4a89-81bd-fa7038d7f33d",
00:22:41.006    "strip_size_kb": 64,
00:22:41.006    "state": "configuring",
00:22:41.006    "raid_level": "raid5f",
00:22:41.006    "superblock": true,
00:22:41.006    "num_base_bdevs": 3,
00:22:41.006    "num_base_bdevs_discovered": 1,
00:22:41.006    "num_base_bdevs_operational": 3,
00:22:41.006    "base_bdevs_list": [
00:22:41.006      {
00:22:41.006        "name": "pt1",
00:22:41.006        "uuid": "bc479e2b-0991-50cd-960b-5c64d879d7b3",
00:22:41.006        "is_configured": true,
00:22:41.006        "data_offset": 2048,
00:22:41.006        "data_size": 63488
00:22:41.006      },
00:22:41.006      {
00:22:41.006        "name": null,
00:22:41.006        "uuid": "a527b5d1-cce8-548d-b5c9-4d5884c93d7e",
00:22:41.006        "is_configured": false,
00:22:41.006        "data_offset": 2048,
00:22:41.006        "data_size": 63488
00:22:41.006      },
00:22:41.006      {
00:22:41.006        "name": null,
00:22:41.006        "uuid": "59d1363a-225d-55d6-a606-5d4cdc4236ac",
00:22:41.006        "is_configured": false,
00:22:41.006        "data_offset": 2048,
00:22:41.006        "data_size": 63488
00:22:41.006      }
00:22:41.006    ]
00:22:41.006  }'
00:22:41.006   23:56:11	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:41.006   23:56:11	-- common/autotest_common.sh@10 -- # set +x
00:22:41.573   23:56:12	-- bdev/bdev_raid.sh@484 -- # (( i = 1 ))
00:22:41.573   23:56:12	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:22:41.573   23:56:12	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:22:41.832   23:56:12	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:22:41.832   23:56:12	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:22:41.832   23:56:12	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:22:42.090   23:56:12	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:22:42.090   23:56:12	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:22:42.090   23:56:12	-- bdev/bdev_raid.sh@489 -- # i=2
00:22:42.090   23:56:12	-- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:22:42.348  [2024-12-13 23:56:12.840800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:22:42.348  [2024-12-13 23:56:12.841021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:42.348  [2024-12-13 23:56:12.841166] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:22:42.348  [2024-12-13 23:56:12.841305] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:42.348  [2024-12-13 23:56:12.841903] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:42.348  [2024-12-13 23:56:12.842078] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:22:42.348  [2024-12-13 23:56:12.842290] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:22:42.348  [2024-12-13 23:56:12.842406] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2)
00:22:42.348  [2024-12-13 23:56:12.842506] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:42.348  [2024-12-13 23:56:12.842569] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state configuring
00:22:42.348  [2024-12-13 23:56:12.842765] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:22:42.348  pt3
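This pass exercises superblock sequence-number precedence: pt3 carries a superblock with seq_number 4, newer than the 2 held by the raid bdev that had been assembled from pt1, so the examine path deletes the stale raid_bdev1 (still in state configuring) and reassembles it from the newer metadata before claiming pt3, exactly as the DEBUG lines from bdev_raid.c:3237 and bdev_raid.c:2137 report.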
00:22:42.348   23:56:12	-- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2
00:22:42.348   23:56:12	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:42.348   23:56:12	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:42.348   23:56:12	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:42.348   23:56:12	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:42.348   23:56:12	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:42.348   23:56:12	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:42.348   23:56:12	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:42.348   23:56:12	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:42.348   23:56:12	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:42.348    23:56:12	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:42.348    23:56:12	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:42.348   23:56:13	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:42.348    "name": "raid_bdev1",
00:22:42.348    "uuid": "fd730738-1954-4a89-81bd-fa7038d7f33d",
00:22:42.348    "strip_size_kb": 64,
00:22:42.348    "state": "configuring",
00:22:42.348    "raid_level": "raid5f",
00:22:42.348    "superblock": true,
00:22:42.348    "num_base_bdevs": 3,
00:22:42.348    "num_base_bdevs_discovered": 1,
00:22:42.348    "num_base_bdevs_operational": 2,
00:22:42.348    "base_bdevs_list": [
00:22:42.348      {
00:22:42.348        "name": null,
00:22:42.348        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:42.348        "is_configured": false,
00:22:42.348        "data_offset": 2048,
00:22:42.348        "data_size": 63488
00:22:42.348      },
00:22:42.348      {
00:22:42.348        "name": null,
00:22:42.348        "uuid": "a527b5d1-cce8-548d-b5c9-4d5884c93d7e",
00:22:42.348        "is_configured": false,
00:22:42.348        "data_offset": 2048,
00:22:42.348        "data_size": 63488
00:22:42.348      },
00:22:42.348      {
00:22:42.348        "name": "pt3",
00:22:42.348        "uuid": "59d1363a-225d-55d6-a606-5d4cdc4236ac",
00:22:42.348        "is_configured": true,
00:22:42.348        "data_offset": 2048,
00:22:42.348        "data_size": 63488
00:22:42.348      }
00:22:42.348    ]
00:22:42.348  }'
00:22:42.348   23:56:13	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:42.348   23:56:13	-- common/autotest_common.sh@10 -- # set +x
00:22:43.282   23:56:13	-- bdev/bdev_raid.sh@497 -- # (( i = 1 ))
00:22:43.282   23:56:13	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:22:43.282   23:56:13	-- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:43.282  [2024-12-13 23:56:13.921003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:43.282  [2024-12-13 23:56:13.921210] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:43.282  [2024-12-13 23:56:13.921280] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:22:43.282  [2024-12-13 23:56:13.921462] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:43.282  [2024-12-13 23:56:13.921952] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:43.282  [2024-12-13 23:56:13.922116] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:43.282  [2024-12-13 23:56:13.922311] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:22:43.282  [2024-12-13 23:56:13.922462] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:43.282  [2024-12-13 23:56:13.922696] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80
00:22:43.282  [2024-12-13 23:56:13.922811] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:22:43.282  [2024-12-13 23:56:13.922939] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:22:43.282  [2024-12-13 23:56:13.927018] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80
00:22:43.282  [2024-12-13 23:56:13.927166] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80
00:22:43.282  [2024-12-13 23:56:13.927497] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:43.282  pt2
00:22:43.282   23:56:13	-- bdev/bdev_raid.sh@497 -- # (( i++ ))
00:22:43.282   23:56:13	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:22:43.282   23:56:13	-- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:22:43.282   23:56:13	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:43.282   23:56:13	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:43.282   23:56:13	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:43.282   23:56:13	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:43.282   23:56:13	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:43.282   23:56:13	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:43.282   23:56:13	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:43.282   23:56:13	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:43.282   23:56:13	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:43.282    23:56:13	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:43.283    23:56:13	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:43.541   23:56:14	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:43.541    "name": "raid_bdev1",
00:22:43.541    "uuid": "fd730738-1954-4a89-81bd-fa7038d7f33d",
00:22:43.541    "strip_size_kb": 64,
00:22:43.541    "state": "online",
00:22:43.541    "raid_level": "raid5f",
00:22:43.541    "superblock": true,
00:22:43.541    "num_base_bdevs": 3,
00:22:43.541    "num_base_bdevs_discovered": 2,
00:22:43.541    "num_base_bdevs_operational": 2,
00:22:43.541    "base_bdevs_list": [
00:22:43.541      {
00:22:43.541        "name": null,
00:22:43.541        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:43.541        "is_configured": false,
00:22:43.541        "data_offset": 2048,
00:22:43.541        "data_size": 63488
00:22:43.541      },
00:22:43.541      {
00:22:43.541        "name": "pt2",
00:22:43.541        "uuid": "a527b5d1-cce8-548d-b5c9-4d5884c93d7e",
00:22:43.541        "is_configured": true,
00:22:43.541        "data_offset": 2048,
00:22:43.541        "data_size": 63488
00:22:43.541      },
00:22:43.541      {
00:22:43.541        "name": "pt3",
00:22:43.541        "uuid": "59d1363a-225d-55d6-a606-5d4cdc4236ac",
00:22:43.541        "is_configured": true,
00:22:43.541        "data_offset": 2048,
00:22:43.541        "data_size": 63488
00:22:43.541      }
00:22:43.541    ]
00:22:43.541  }'
00:22:43.541   23:56:14	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:43.541   23:56:14	-- common/autotest_common.sh@10 -- # set +x
00:22:44.108    23:56:14	-- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:22:44.108    23:56:14	-- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid'
00:22:44.366  [2024-12-13 23:56:15.084751] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:44.625   23:56:15	-- bdev/bdev_raid.sh@506 -- # '[' fd730738-1954-4a89-81bd-fa7038d7f33d '!=' fd730738-1954-4a89-81bd-fa7038d7f33d ']'
00:22:44.625   23:56:15	-- bdev/bdev_raid.sh@511 -- # killprocess 127547
00:22:44.625   23:56:15	-- common/autotest_common.sh@936 -- # '[' -z 127547 ']'
00:22:44.625   23:56:15	-- common/autotest_common.sh@940 -- # kill -0 127547
00:22:44.625    23:56:15	-- common/autotest_common.sh@941 -- # uname
00:22:44.625   23:56:15	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:44.625    23:56:15	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127547
00:22:44.625   23:56:15	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:44.625   23:56:15	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:44.625   23:56:15	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 127547'
00:22:44.625  killing process with pid 127547
00:22:44.625   23:56:15	-- common/autotest_common.sh@955 -- # kill 127547
00:22:44.625  [2024-12-13 23:56:15.126789] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:22:44.625   23:56:15	-- common/autotest_common.sh@960 -- # wait 127547
00:22:44.625  [2024-12-13 23:56:15.127005] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:44.625  [2024-12-13 23:56:15.127214] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:44.625  [2024-12-13 23:56:15.127324] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline
00:22:44.625  [2024-12-13 23:56:15.318022] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:22:45.560   23:56:16	-- bdev/bdev_raid.sh@513 -- # return 0
00:22:45.560  
00:22:45.560  real	0m18.525s
00:22:45.560  user	0m33.860s
00:22:45.560  sys	0m2.296s
00:22:45.560   23:56:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:22:45.560   23:56:16	-- common/autotest_common.sh@10 -- # set +x
00:22:45.560  ************************************
00:22:45.560  END TEST raid5f_superblock_test
00:22:45.560  ************************************
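raid5f_superblock_test finishes in 18.5 s of wall time while consuming 33.9 s of user CPU, roughly 1.8 cores busy on average, which is consistent with SPDK's busy-polling reactors rather than with any parallelism in the test itself.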
00:22:45.560   23:56:16	-- bdev/bdev_raid.sh@747 -- # '[' true = true ']'
00:22:45.560   23:56:16	-- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false
00:22:45.560   23:56:16	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:22:45.560   23:56:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:45.560   23:56:16	-- common/autotest_common.sh@10 -- # set +x
00:22:45.819  ************************************
00:22:45.819  START TEST raid5f_rebuild_test
00:22:45.819  ************************************
00:22:45.819   23:56:16	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 false false
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@519 -- # local superblock=false
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@520 -- # local background_io=false
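The four positional arguments of raid_rebuild_test map directly onto the locals above: raid level raid5f, 3 base bdevs, no on-disk superblock, and no background I/O during the rebuild. The two false flags select the simplest rebuild scenario; the superblock variants are covered elsewhere in the suite.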
00:22:45.819    23:56:16	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:22:45.819    23:56:16	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:45.819    23:56:16	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:22:45.819    23:56:16	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:22:45.819    23:56:16	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:45.819    23:56:16	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:22:45.819    23:56:16	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:22:45.819    23:56:16	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:45.819    23:56:16	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:22:45.819    23:56:16	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:22:45.819    23:56:16	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']'
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@529 -- # '[' false = true ']'
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@533 -- # strip_size=64
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64'
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@539 -- # '[' false = true ']'
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@544 -- # raid_pid=128140
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@545 -- # waitforlisten 128140 /var/tmp/spdk-raid.sock
00:22:45.819   23:56:16	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:22:45.819   23:56:16	-- common/autotest_common.sh@829 -- # '[' -z 128140 ']'
00:22:45.819   23:56:16	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:22:45.819   23:56:16	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:45.819   23:56:16	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:22:45.819  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:22:45.819   23:56:16	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:45.819   23:56:16	-- common/autotest_common.sh@10 -- # set +x
00:22:45.819  [2024-12-13 23:56:16.380880] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:45.819  [2024-12-13 23:56:16.381234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128140 ]
00:22:45.819  I/O size of 3145728 is greater than zero copy threshold (65536).
00:22:45.819  Zero copy mechanism will not be used.
00:22:45.819  [2024-12-13 23:56:16.551624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:46.078  [2024-12-13 23:56:16.763981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:46.336  [2024-12-13 23:56:16.953787] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
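For the rebuild test the fixture is bdevperf rather than a bare target: the @543 command line above apparently runs it against raid_bdev1 (-T) for 60 s of 50/50 random read/write (-w randrw -M 50) with 3 MiB I/Os at queue depth 2 (-o 3M -q 2), deferring the workload until started over RPC (-z) and enabling the bdev_raid debug log flag (-L bdev_raid), which is why the *DEBUG* lines appear throughout. The 3 MiB I/O size exceeds the 64 KiB zero-copy threshold, hence the notice above; flag interpretations here follow bdevperf's usual options and are worth verifying against the tree in use.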
00:22:46.902   23:56:17	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:46.902   23:56:17	-- common/autotest_common.sh@862 -- # return 0
00:22:46.902   23:56:17	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:22:46.902   23:56:17	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:22:46.902   23:56:17	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:22:46.903  BaseBdev1
00:22:46.903   23:56:17	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:22:46.903   23:56:17	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:22:46.903   23:56:17	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:22:47.161  BaseBdev2
00:22:47.420   23:56:17	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:22:47.420   23:56:17	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:22:47.420   23:56:17	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:22:47.420  BaseBdev3
00:22:47.420   23:56:18	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:22:47.678  spare_malloc
00:22:47.678   23:56:18	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:22:47.936  spare_delay
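The spare is deliberately slow: a delay bdev is layered on spare_malloc with zero read latency and 100000 us (100 ms) average and p99 write latency, so rebuild writes to the spare take long enough for the progress polls below to catch intermediate block counts. The creation call from the trace, with the flag meanings spelled out (interpretation per SPDK's bdev_delay_create; worth verifying against the tree in use):

    # Flag meanings, all in microseconds:
    #   -r avg read latency, -t p99 read latency,
    #   -w avg write latency, -n p99 write latency
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000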
00:22:47.936   23:56:18	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:22:48.195  [2024-12-13 23:56:18.787237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:22:48.195  [2024-12-13 23:56:18.787514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:48.195  [2024-12-13 23:56:18.787595] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:22:48.195  [2024-12-13 23:56:18.787870] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:48.195  [2024-12-13 23:56:18.790236] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:48.195  [2024-12-13 23:56:18.790415] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:22:48.195  spare
00:22:48.195   23:56:18	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1
00:22:48.454  [2024-12-13 23:56:18.971313] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:48.454  [2024-12-13 23:56:18.973357] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:22:48.454  [2024-12-13 23:56:18.973529] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:22:48.454  [2024-12-13 23:56:18.973695] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780
00:22:48.454  [2024-12-13 23:56:18.973760] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:22:48.454  [2024-12-13 23:56:18.974003] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860
00:22:48.454  [2024-12-13 23:56:18.978453] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780
00:22:48.454  [2024-12-13 23:56:18.978602] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780
00:22:48.454  [2024-12-13 23:56:18.978895] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:48.454   23:56:18	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:22:48.454   23:56:18	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:48.454   23:56:18	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:48.454   23:56:18	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:48.454   23:56:18	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:48.454   23:56:18	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:48.454   23:56:18	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:48.454   23:56:18	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:48.454   23:56:18	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:48.454   23:56:18	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:48.454    23:56:18	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:48.454    23:56:18	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:48.454   23:56:19	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:48.454    "name": "raid_bdev1",
00:22:48.454    "uuid": "8144d71e-934c-4cb9-bd8e-af87c290c565",
00:22:48.454    "strip_size_kb": 64,
00:22:48.454    "state": "online",
00:22:48.454    "raid_level": "raid5f",
00:22:48.454    "superblock": false,
00:22:48.454    "num_base_bdevs": 3,
00:22:48.454    "num_base_bdevs_discovered": 3,
00:22:48.454    "num_base_bdevs_operational": 3,
00:22:48.454    "base_bdevs_list": [
00:22:48.454      {
00:22:48.454        "name": "BaseBdev1",
00:22:48.454        "uuid": "74072f84-e108-48f4-968b-fb2361058ba1",
00:22:48.454        "is_configured": true,
00:22:48.454        "data_offset": 0,
00:22:48.454        "data_size": 65536
00:22:48.454      },
00:22:48.454      {
00:22:48.454        "name": "BaseBdev2",
00:22:48.454        "uuid": "7114a20a-4815-484f-bcc2-08d081d38c8f",
00:22:48.454        "is_configured": true,
00:22:48.454        "data_offset": 0,
00:22:48.454        "data_size": 65536
00:22:48.454      },
00:22:48.454      {
00:22:48.454        "name": "BaseBdev3",
00:22:48.454        "uuid": "217510dd-f07e-4948-b742-eea658e8338c",
00:22:48.454        "is_configured": true,
00:22:48.454        "data_offset": 0,
00:22:48.454        "data_size": 65536
00:22:48.454      }
00:22:48.454    ]
00:22:48.454  }'
00:22:48.454   23:56:19	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:48.454   23:56:19	-- common/autotest_common.sh@10 -- # set +x
00:22:49.388    23:56:19	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:22:49.388    23:56:19	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:22:49.388  [2024-12-13 23:56:20.032313] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:49.388   23:56:20	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072
00:22:49.388    23:56:20	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:22:49.388    23:56:20	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:49.647   23:56:20	-- bdev/bdev_raid.sh@570 -- # data_offset=0
00:22:49.647   23:56:20	-- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:22:49.647   23:56:20	-- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:22:49.647   23:56:20	-- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:22:49.647   23:56:20	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:22:49.647   23:56:20	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:22:49.647   23:56:20	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:22:49.647   23:56:20	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:22:49.647   23:56:20	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:22:49.647   23:56:20	-- bdev/nbd_common.sh@12 -- # local i
00:22:49.647   23:56:20	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:22:49.647   23:56:20	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:22:49.647   23:56:20	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:22:49.905  [2024-12-13 23:56:20.456295] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:22:49.905  /dev/nbd0
00:22:49.905    23:56:20	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:22:49.905   23:56:20	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:22:49.905   23:56:20	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:22:49.905   23:56:20	-- common/autotest_common.sh@867 -- # local i
00:22:49.905   23:56:20	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:22:49.905   23:56:20	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:22:49.905   23:56:20	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:22:49.905   23:56:20	-- common/autotest_common.sh@871 -- # break
00:22:49.905   23:56:20	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:22:49.905   23:56:20	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:22:49.905   23:56:20	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:22:49.905  1+0 records in
00:22:49.905  1+0 records out
00:22:49.905  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416013 s, 9.8 MB/s
00:22:49.905    23:56:20	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:22:49.905   23:56:20	-- common/autotest_common.sh@884 -- # size=4096
00:22:49.905   23:56:20	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:22:49.905   23:56:20	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:22:49.905   23:56:20	-- common/autotest_common.sh@887 -- # return 0
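waitfornbd treats the device as usable only after nbd0 appears in /proc/partitions and a single 4 KiB O_DIRECT read through /dev/nbd0 succeeds and leaves a file of the expected size; the 1+0 record dd lines above are that readiness probe, not part of the workload.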
00:22:49.905   23:56:20	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:22:49.905   23:56:20	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:22:49.905   23:56:20	-- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']'
00:22:49.905   23:56:20	-- bdev/bdev_raid.sh@581 -- # write_unit_size=256
00:22:49.905   23:56:20	-- bdev/bdev_raid.sh@582 -- # echo 128
00:22:49.905   23:56:20	-- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct
00:22:50.472  512+0 records in
00:22:50.472  512+0 records out
00:22:50.472  67108864 bytes (67 MB, 64 MiB) copied, 0.420685 s, 160 MB/s
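The dd geometry follows from the raid layout: a 64 KiB strip across 3 base bdevs leaves 2 data strips per stripe, so a full-stripe write is 2 x 64 KiB = 128 KiB, which is both the bs=131072 used here and the write_unit_size of 256 512-byte blocks set at @581. count=512 full stripes then writes 512 x 128 KiB = 64 MiB, exactly the raid bdev's 131072-block (64 MiB) capacity reported earlier, so the whole array is populated before the rebuild paths are exercised.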
00:22:50.472   23:56:20	-- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:22:50.472   23:56:20	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:22:50.472   23:56:20	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:22:50.472   23:56:20	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:22:50.472   23:56:20	-- bdev/nbd_common.sh@51 -- # local i
00:22:50.472   23:56:20	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:22:50.472   23:56:20	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:22:50.472  [2024-12-13 23:56:21.153721] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:50.472    23:56:21	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:22:50.472   23:56:21	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:22:50.472   23:56:21	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:22:50.472   23:56:21	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:22:50.472   23:56:21	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:22:50.472   23:56:21	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:22:50.472   23:56:21	-- bdev/nbd_common.sh@41 -- # break
00:22:50.472   23:56:21	-- bdev/nbd_common.sh@45 -- # return 0
00:22:50.472   23:56:21	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:22:50.731  [2024-12-13 23:56:21.332175] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
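Removing BaseBdev1 from the online array does not take it offline: raid5f keeps serving I/O with 2 of 3 members, so the verify below expects state online with num_base_bdevs_discovered and num_base_bdevs_operational both 2, and a null placeholder in BaseBdev1's slot of base_bdevs_list.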
00:22:50.731   23:56:21	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:22:50.731   23:56:21	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:50.731   23:56:21	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:50.731   23:56:21	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:50.731   23:56:21	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:50.731   23:56:21	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:50.731   23:56:21	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:50.731   23:56:21	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:50.731   23:56:21	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:50.731   23:56:21	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:50.731    23:56:21	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:50.731    23:56:21	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:50.990   23:56:21	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:50.990    "name": "raid_bdev1",
00:22:50.990    "uuid": "8144d71e-934c-4cb9-bd8e-af87c290c565",
00:22:50.990    "strip_size_kb": 64,
00:22:50.990    "state": "online",
00:22:50.990    "raid_level": "raid5f",
00:22:50.990    "superblock": false,
00:22:50.990    "num_base_bdevs": 3,
00:22:50.990    "num_base_bdevs_discovered": 2,
00:22:50.990    "num_base_bdevs_operational": 2,
00:22:50.990    "base_bdevs_list": [
00:22:50.990      {
00:22:50.990        "name": null,
00:22:50.990        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:50.990        "is_configured": false,
00:22:50.990        "data_offset": 0,
00:22:50.990        "data_size": 65536
00:22:50.990      },
00:22:50.990      {
00:22:50.990        "name": "BaseBdev2",
00:22:50.990        "uuid": "7114a20a-4815-484f-bcc2-08d081d38c8f",
00:22:50.990        "is_configured": true,
00:22:50.990        "data_offset": 0,
00:22:50.990        "data_size": 65536
00:22:50.990      },
00:22:50.990      {
00:22:50.990        "name": "BaseBdev3",
00:22:50.990        "uuid": "217510dd-f07e-4948-b742-eea658e8338c",
00:22:50.990        "is_configured": true,
00:22:50.990        "data_offset": 0,
00:22:50.990        "data_size": 65536
00:22:50.990      }
00:22:50.990    ]
00:22:50.990  }'
00:22:50.990   23:56:21	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:50.990   23:56:21	-- common/autotest_common.sh@10 -- # set +x
00:22:51.558   23:56:22	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:22:51.816  [2024-12-13 23:56:22.432359] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:22:51.816  [2024-12-13 23:56:22.432532] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:22:51.816  [2024-12-13 23:56:22.443995] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000
00:22:51.816  [2024-12-13 23:56:22.449951] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:22:51.816   23:56:22	-- bdev/bdev_raid.sh@598 -- # sleep 1
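bdev_raid_add_base_bdev attaches the delayed spare and the raid module starts a rebuild onto it (the bdev_raid.c:2603 NOTICE above). After the one-second sleep, verify_raid_bdev_process inspects the process object in the raid JSON. A minimal sketch of that poll, reusing the jq filters visible later in the trace:

    # Prints "rebuild" while a rebuild is running, "none" otherwise
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"'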
00:22:52.750   23:56:23	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:52.750   23:56:23	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:52.750   23:56:23	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:52.750   23:56:23	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:52.750   23:56:23	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:52.750    23:56:23	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:52.750    23:56:23	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:53.008   23:56:23	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:53.008    "name": "raid_bdev1",
00:22:53.008    "uuid": "8144d71e-934c-4cb9-bd8e-af87c290c565",
00:22:53.008    "strip_size_kb": 64,
00:22:53.008    "state": "online",
00:22:53.008    "raid_level": "raid5f",
00:22:53.008    "superblock": false,
00:22:53.008    "num_base_bdevs": 3,
00:22:53.008    "num_base_bdevs_discovered": 3,
00:22:53.008    "num_base_bdevs_operational": 3,
00:22:53.008    "process": {
00:22:53.008      "type": "rebuild",
00:22:53.008      "target": "spare",
00:22:53.008      "progress": {
00:22:53.008        "blocks": 24576,
00:22:53.008        "percent": 18
00:22:53.008      }
00:22:53.008    },
00:22:53.008    "base_bdevs_list": [
00:22:53.008      {
00:22:53.008        "name": "spare",
00:22:53.008        "uuid": "aad59945-0b90-5726-9d82-38b446dbeb86",
00:22:53.008        "is_configured": true,
00:22:53.008        "data_offset": 0,
00:22:53.008        "data_size": 65536
00:22:53.008      },
00:22:53.008      {
00:22:53.008        "name": "BaseBdev2",
00:22:53.008        "uuid": "7114a20a-4815-484f-bcc2-08d081d38c8f",
00:22:53.008        "is_configured": true,
00:22:53.008        "data_offset": 0,
00:22:53.008        "data_size": 65536
00:22:53.008      },
00:22:53.008      {
00:22:53.008        "name": "BaseBdev3",
00:22:53.008        "uuid": "217510dd-f07e-4948-b742-eea658e8338c",
00:22:53.008        "is_configured": true,
00:22:53.008        "data_offset": 0,
00:22:53.009        "data_size": 65536
00:22:53.009      }
00:22:53.009    ]
00:22:53.009  }'
00:22:53.009    23:56:23	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:53.268   23:56:23	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:53.268    23:56:23	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:53.268   23:56:23	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:22:53.268   23:56:23	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:22:53.526  [2024-12-13 23:56:24.015523] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:22:53.526  [2024-12-13 23:56:24.063561] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:22:53.526  [2024-12-13 23:56:24.063825] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
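Pulling the rebuild target out from under a running rebuild is the point of this step: removing spare aborts the process, the module logs the "Finished rebuild ... No such device" WARNING above, and the array falls back to a degraded online state with 2 of 3 members, which the next verify and the subsequent process-type "none" checks confirm.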
00:22:53.526   23:56:24	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:22:53.526   23:56:24	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:53.526   23:56:24	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:53.526   23:56:24	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:53.527   23:56:24	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:53.527   23:56:24	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:53.527   23:56:24	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:53.527   23:56:24	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:53.527   23:56:24	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:53.527   23:56:24	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:53.527    23:56:24	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:53.527    23:56:24	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:53.785   23:56:24	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:53.785    "name": "raid_bdev1",
00:22:53.785    "uuid": "8144d71e-934c-4cb9-bd8e-af87c290c565",
00:22:53.785    "strip_size_kb": 64,
00:22:53.785    "state": "online",
00:22:53.785    "raid_level": "raid5f",
00:22:53.786    "superblock": false,
00:22:53.786    "num_base_bdevs": 3,
00:22:53.786    "num_base_bdevs_discovered": 2,
00:22:53.786    "num_base_bdevs_operational": 2,
00:22:53.786    "base_bdevs_list": [
00:22:53.786      {
00:22:53.786        "name": null,
00:22:53.786        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:53.786        "is_configured": false,
00:22:53.786        "data_offset": 0,
00:22:53.786        "data_size": 65536
00:22:53.786      },
00:22:53.786      {
00:22:53.786        "name": "BaseBdev2",
00:22:53.786        "uuid": "7114a20a-4815-484f-bcc2-08d081d38c8f",
00:22:53.786        "is_configured": true,
00:22:53.786        "data_offset": 0,
00:22:53.786        "data_size": 65536
00:22:53.786      },
00:22:53.786      {
00:22:53.786        "name": "BaseBdev3",
00:22:53.786        "uuid": "217510dd-f07e-4948-b742-eea658e8338c",
00:22:53.786        "is_configured": true,
00:22:53.786        "data_offset": 0,
00:22:53.786        "data_size": 65536
00:22:53.786      }
00:22:53.786    ]
00:22:53.786  }'
00:22:53.786   23:56:24	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:53.786   23:56:24	-- common/autotest_common.sh@10 -- # set +x
00:22:54.353   23:56:24	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:22:54.353   23:56:24	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:54.353   23:56:24	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:22:54.353   23:56:24	-- bdev/bdev_raid.sh@185 -- # local target=none
00:22:54.353   23:56:24	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:54.353    23:56:24	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:54.353    23:56:24	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:54.611   23:56:25	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:54.611    "name": "raid_bdev1",
00:22:54.611    "uuid": "8144d71e-934c-4cb9-bd8e-af87c290c565",
00:22:54.611    "strip_size_kb": 64,
00:22:54.611    "state": "online",
00:22:54.611    "raid_level": "raid5f",
00:22:54.611    "superblock": false,
00:22:54.611    "num_base_bdevs": 3,
00:22:54.611    "num_base_bdevs_discovered": 2,
00:22:54.611    "num_base_bdevs_operational": 2,
00:22:54.611    "base_bdevs_list": [
00:22:54.611      {
00:22:54.611        "name": null,
00:22:54.611        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:54.611        "is_configured": false,
00:22:54.611        "data_offset": 0,
00:22:54.611        "data_size": 65536
00:22:54.611      },
00:22:54.611      {
00:22:54.611        "name": "BaseBdev2",
00:22:54.611        "uuid": "7114a20a-4815-484f-bcc2-08d081d38c8f",
00:22:54.611        "is_configured": true,
00:22:54.611        "data_offset": 0,
00:22:54.611        "data_size": 65536
00:22:54.611      },
00:22:54.611      {
00:22:54.611        "name": "BaseBdev3",
00:22:54.611        "uuid": "217510dd-f07e-4948-b742-eea658e8338c",
00:22:54.611        "is_configured": true,
00:22:54.611        "data_offset": 0,
00:22:54.611        "data_size": 65536
00:22:54.611      }
00:22:54.611    ]
00:22:54.611  }'
00:22:54.611    23:56:25	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:54.611   23:56:25	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:22:54.611    23:56:25	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:54.611   23:56:25	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:22:54.611   23:56:25	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:22:54.869  [2024-12-13 23:56:25.426062] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:22:54.869  [2024-12-13 23:56:25.426262] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:22:54.869  [2024-12-13 23:56:25.436797] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0
00:22:54.869  [2024-12-13 23:56:25.442754] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:22:54.869   23:56:25	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:22:55.805   23:56:26	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:55.805   23:56:26	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:55.805   23:56:26	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:55.805   23:56:26	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:55.805   23:56:26	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:55.805    23:56:26	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:55.805    23:56:26	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:56.064   23:56:26	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:56.064    "name": "raid_bdev1",
00:22:56.064    "uuid": "8144d71e-934c-4cb9-bd8e-af87c290c565",
00:22:56.064    "strip_size_kb": 64,
00:22:56.064    "state": "online",
00:22:56.064    "raid_level": "raid5f",
00:22:56.064    "superblock": false,
00:22:56.064    "num_base_bdevs": 3,
00:22:56.064    "num_base_bdevs_discovered": 3,
00:22:56.064    "num_base_bdevs_operational": 3,
00:22:56.064    "process": {
00:22:56.064      "type": "rebuild",
00:22:56.064      "target": "spare",
00:22:56.064      "progress": {
00:22:56.064        "blocks": 24576,
00:22:56.064        "percent": 18
00:22:56.064      }
00:22:56.064    },
00:22:56.064    "base_bdevs_list": [
00:22:56.064      {
00:22:56.064        "name": "spare",
00:22:56.064        "uuid": "aad59945-0b90-5726-9d82-38b446dbeb86",
00:22:56.064        "is_configured": true,
00:22:56.064        "data_offset": 0,
00:22:56.064        "data_size": 65536
00:22:56.064      },
00:22:56.064      {
00:22:56.064        "name": "BaseBdev2",
00:22:56.064        "uuid": "7114a20a-4815-484f-bcc2-08d081d38c8f",
00:22:56.064        "is_configured": true,
00:22:56.064        "data_offset": 0,
00:22:56.064        "data_size": 65536
00:22:56.064      },
00:22:56.064      {
00:22:56.064        "name": "BaseBdev3",
00:22:56.064        "uuid": "217510dd-f07e-4948-b742-eea658e8338c",
00:22:56.064        "is_configured": true,
00:22:56.064        "data_offset": 0,
00:22:56.064        "data_size": 65536
00:22:56.064      }
00:22:56.064    ]
00:22:56.064  }'
00:22:56.064    23:56:26	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:56.064   23:56:26	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:56.064    23:56:26	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:56.065   23:56:26	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:22:56.065   23:56:26	-- bdev/bdev_raid.sh@617 -- # '[' false = true ']'
00:22:56.065   23:56:26	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3
00:22:56.065   23:56:26	-- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']'
00:22:56.065   23:56:26	-- bdev/bdev_raid.sh@657 -- # local timeout=597
00:22:56.065   23:56:26	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:22:56.065   23:56:26	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:56.065   23:56:26	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:56.065   23:56:26	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:56.065   23:56:26	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:56.065   23:56:26	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:56.065    23:56:26	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:56.065    23:56:26	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:56.323   23:56:27	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:56.324    "name": "raid_bdev1",
00:22:56.324    "uuid": "8144d71e-934c-4cb9-bd8e-af87c290c565",
00:22:56.324    "strip_size_kb": 64,
00:22:56.324    "state": "online",
00:22:56.324    "raid_level": "raid5f",
00:22:56.324    "superblock": false,
00:22:56.324    "num_base_bdevs": 3,
00:22:56.324    "num_base_bdevs_discovered": 3,
00:22:56.324    "num_base_bdevs_operational": 3,
00:22:56.324    "process": {
00:22:56.324      "type": "rebuild",
00:22:56.324      "target": "spare",
00:22:56.324      "progress": {
00:22:56.324        "blocks": 30720,
00:22:56.324        "percent": 23
00:22:56.324      }
00:22:56.324    },
00:22:56.324    "base_bdevs_list": [
00:22:56.324      {
00:22:56.324        "name": "spare",
00:22:56.324        "uuid": "aad59945-0b90-5726-9d82-38b446dbeb86",
00:22:56.324        "is_configured": true,
00:22:56.324        "data_offset": 0,
00:22:56.324        "data_size": 65536
00:22:56.324      },
00:22:56.324      {
00:22:56.324        "name": "BaseBdev2",
00:22:56.324        "uuid": "7114a20a-4815-484f-bcc2-08d081d38c8f",
00:22:56.324        "is_configured": true,
00:22:56.324        "data_offset": 0,
00:22:56.324        "data_size": 65536
00:22:56.324      },
00:22:56.324      {
00:22:56.324        "name": "BaseBdev3",
00:22:56.324        "uuid": "217510dd-f07e-4948-b742-eea658e8338c",
00:22:56.324        "is_configured": true,
00:22:56.324        "data_offset": 0,
00:22:56.324        "data_size": 65536
00:22:56.324      }
00:22:56.324    ]
00:22:56.324  }'
00:22:56.324    23:56:27	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:56.582   23:56:27	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:56.582    23:56:27	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:56.582   23:56:27	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:22:56.582   23:56:27	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:22:57.518   23:56:28	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:22:57.518   23:56:28	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:57.518   23:56:28	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:57.518   23:56:28	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:57.518   23:56:28	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:57.518   23:56:28	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:57.518    23:56:28	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:57.518    23:56:28	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:57.777   23:56:28	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:57.777    "name": "raid_bdev1",
00:22:57.777    "uuid": "8144d71e-934c-4cb9-bd8e-af87c290c565",
00:22:57.777    "strip_size_kb": 64,
00:22:57.777    "state": "online",
00:22:57.777    "raid_level": "raid5f",
00:22:57.777    "superblock": false,
00:22:57.777    "num_base_bdevs": 3,
00:22:57.777    "num_base_bdevs_discovered": 3,
00:22:57.777    "num_base_bdevs_operational": 3,
00:22:57.777    "process": {
00:22:57.777      "type": "rebuild",
00:22:57.777      "target": "spare",
00:22:57.777      "progress": {
00:22:57.777        "blocks": 57344,
00:22:57.777        "percent": 43
00:22:57.777      }
00:22:57.777    },
00:22:57.777    "base_bdevs_list": [
00:22:57.777      {
00:22:57.777        "name": "spare",
00:22:57.777        "uuid": "aad59945-0b90-5726-9d82-38b446dbeb86",
00:22:57.777        "is_configured": true,
00:22:57.777        "data_offset": 0,
00:22:57.777        "data_size": 65536
00:22:57.777      },
00:22:57.777      {
00:22:57.777        "name": "BaseBdev2",
00:22:57.777        "uuid": "7114a20a-4815-484f-bcc2-08d081d38c8f",
00:22:57.777        "is_configured": true,
00:22:57.777        "data_offset": 0,
00:22:57.777        "data_size": 65536
00:22:57.777      },
00:22:57.777      {
00:22:57.777        "name": "BaseBdev3",
00:22:57.777        "uuid": "217510dd-f07e-4948-b742-eea658e8338c",
00:22:57.777        "is_configured": true,
00:22:57.777        "data_offset": 0,
00:22:57.777        "data_size": 65536
00:22:57.777      }
00:22:57.777    ]
00:22:57.777  }'
00:22:57.777    23:56:28	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:57.777   23:56:28	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:57.777    23:56:28	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:57.777   23:56:28	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:22:57.777   23:56:28	-- bdev/bdev_raid.sh@662 -- # sleep 1
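[annotation] Reassembled from the xtrace above, the monitor around these dumps is roughly the following. This is a sketch pieced together from the traced line numbers (bdev_raid.sh@657-662 and @183-191), not a verbatim copy of the script; the rpc.py path and socket are taken from the log:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    verify_raid_bdev_process() {                       # bdev_raid.sh@183-191
        local raid_bdev_name=$1 process_type=$2 target=$3 raid_bdev_info
        raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "$process_type" ]] &&
        [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "$target" ]]
    }

    timeout=597                                        # value captured at @657
    while (( SECONDS < timeout )); do                  # @658
        verify_raid_bdev_process raid_bdev1 rebuild spare || break   # @659-660
        sleep 1                                        # @662
    done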
00:22:59.153   23:56:29	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:22:59.153   23:56:29	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:59.153   23:56:29	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:59.153   23:56:29	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:59.153   23:56:29	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:59.153   23:56:29	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:59.153    23:56:29	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:59.153    23:56:29	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:59.153   23:56:29	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:59.153    "name": "raid_bdev1",
00:22:59.153    "uuid": "8144d71e-934c-4cb9-bd8e-af87c290c565",
00:22:59.153    "strip_size_kb": 64,
00:22:59.153    "state": "online",
00:22:59.153    "raid_level": "raid5f",
00:22:59.153    "superblock": false,
00:22:59.153    "num_base_bdevs": 3,
00:22:59.153    "num_base_bdevs_discovered": 3,
00:22:59.153    "num_base_bdevs_operational": 3,
00:22:59.153    "process": {
00:22:59.153      "type": "rebuild",
00:22:59.153      "target": "spare",
00:22:59.153      "progress": {
00:22:59.153        "blocks": 86016,
00:22:59.153        "percent": 65
00:22:59.153      }
00:22:59.153    },
00:22:59.153    "base_bdevs_list": [
00:22:59.153      {
00:22:59.153        "name": "spare",
00:22:59.153        "uuid": "aad59945-0b90-5726-9d82-38b446dbeb86",
00:22:59.153        "is_configured": true,
00:22:59.153        "data_offset": 0,
00:22:59.153        "data_size": 65536
00:22:59.153      },
00:22:59.153      {
00:22:59.153        "name": "BaseBdev2",
00:22:59.154        "uuid": "7114a20a-4815-484f-bcc2-08d081d38c8f",
00:22:59.154        "is_configured": true,
00:22:59.154        "data_offset": 0,
00:22:59.154        "data_size": 65536
00:22:59.154      },
00:22:59.154      {
00:22:59.154        "name": "BaseBdev3",
00:22:59.154        "uuid": "217510dd-f07e-4948-b742-eea658e8338c",
00:22:59.154        "is_configured": true,
00:22:59.154        "data_offset": 0,
00:22:59.154        "data_size": 65536
00:22:59.154      }
00:22:59.154    ]
00:22:59.154  }'
00:22:59.154    23:56:29	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:59.154   23:56:29	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:59.154    23:56:29	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:59.154   23:56:29	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:22:59.154   23:56:29	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:23:00.090   23:56:30	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:23:00.090   23:56:30	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:00.090   23:56:30	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:23:00.090   23:56:30	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:23:00.090   23:56:30	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:23:00.090   23:56:30	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:23:00.349    23:56:30	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:00.349    23:56:30	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:00.349   23:56:31	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:23:00.349    "name": "raid_bdev1",
00:23:00.349    "uuid": "8144d71e-934c-4cb9-bd8e-af87c290c565",
00:23:00.349    "strip_size_kb": 64,
00:23:00.349    "state": "online",
00:23:00.349    "raid_level": "raid5f",
00:23:00.349    "superblock": false,
00:23:00.349    "num_base_bdevs": 3,
00:23:00.349    "num_base_bdevs_discovered": 3,
00:23:00.349    "num_base_bdevs_operational": 3,
00:23:00.349    "process": {
00:23:00.349      "type": "rebuild",
00:23:00.349      "target": "spare",
00:23:00.349      "progress": {
00:23:00.349        "blocks": 112640,
00:23:00.349        "percent": 85
00:23:00.349      }
00:23:00.349    },
00:23:00.349    "base_bdevs_list": [
00:23:00.349      {
00:23:00.349        "name": "spare",
00:23:00.349        "uuid": "aad59945-0b90-5726-9d82-38b446dbeb86",
00:23:00.349        "is_configured": true,
00:23:00.349        "data_offset": 0,
00:23:00.349        "data_size": 65536
00:23:00.349      },
00:23:00.349      {
00:23:00.349        "name": "BaseBdev2",
00:23:00.349        "uuid": "7114a20a-4815-484f-bcc2-08d081d38c8f",
00:23:00.349        "is_configured": true,
00:23:00.349        "data_offset": 0,
00:23:00.349        "data_size": 65536
00:23:00.349      },
00:23:00.349      {
00:23:00.349        "name": "BaseBdev3",
00:23:00.349        "uuid": "217510dd-f07e-4948-b742-eea658e8338c",
00:23:00.349        "is_configured": true,
00:23:00.349        "data_offset": 0,
00:23:00.349        "data_size": 65536
00:23:00.349      }
00:23:00.349    ]
00:23:00.349  }'
00:23:00.349    23:56:31	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:23:00.608   23:56:31	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:23:00.608    23:56:31	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:23:00.608   23:56:31	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:23:00.608   23:56:31	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:23:01.190  [2024-12-13 23:56:31.895278] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:23:01.190  [2024-12-13 23:56:31.895486] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:23:01.190  [2024-12-13 23:56:31.895707] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:01.503   23:56:32	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:23:01.503   23:56:32	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:01.503   23:56:32	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:23:01.503   23:56:32	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:23:01.503   23:56:32	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:23:01.503   23:56:32	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:23:01.503    23:56:32	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:01.503    23:56:32	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:01.774   23:56:32	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:23:01.774    "name": "raid_bdev1",
00:23:01.774    "uuid": "8144d71e-934c-4cb9-bd8e-af87c290c565",
00:23:01.774    "strip_size_kb": 64,
00:23:01.774    "state": "online",
00:23:01.774    "raid_level": "raid5f",
00:23:01.774    "superblock": false,
00:23:01.774    "num_base_bdevs": 3,
00:23:01.774    "num_base_bdevs_discovered": 3,
00:23:01.774    "num_base_bdevs_operational": 3,
00:23:01.774    "base_bdevs_list": [
00:23:01.774      {
00:23:01.774        "name": "spare",
00:23:01.774        "uuid": "aad59945-0b90-5726-9d82-38b446dbeb86",
00:23:01.775        "is_configured": true,
00:23:01.775        "data_offset": 0,
00:23:01.775        "data_size": 65536
00:23:01.775      },
00:23:01.775      {
00:23:01.775        "name": "BaseBdev2",
00:23:01.775        "uuid": "7114a20a-4815-484f-bcc2-08d081d38c8f",
00:23:01.775        "is_configured": true,
00:23:01.775        "data_offset": 0,
00:23:01.775        "data_size": 65536
00:23:01.775      },
00:23:01.775      {
00:23:01.775        "name": "BaseBdev3",
00:23:01.775        "uuid": "217510dd-f07e-4948-b742-eea658e8338c",
00:23:01.775        "is_configured": true,
00:23:01.775        "data_offset": 0,
00:23:01.775        "data_size": 65536
00:23:01.775      }
00:23:01.775    ]
00:23:01.775  }'
00:23:01.775    23:56:32	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:23:01.775   23:56:32	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:23:01.775    23:56:32	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:23:01.775   23:56:32	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:23:01.775   23:56:32	-- bdev/bdev_raid.sh@660 -- # break
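[annotation] Completion is detected purely through jq's alternative operator: in the 00:23:01 dump above the rebuild has finished and the "process" object is gone from the RPC output, so .process.type evaluates to null and // supplies the fallback:

    echo '{"name": "raid_bdev1"}' | jq -r '.process.type // "none"'   # prints: none

That makes the [[ none == rebuild ]] test at @190 fail, which is what drives the break at @660.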
00:23:01.775   23:56:32	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:23:01.775   23:56:32	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:23:01.775   23:56:32	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:23:01.775   23:56:32	-- bdev/bdev_raid.sh@185 -- # local target=none
00:23:01.775   23:56:32	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:23:02.034    23:56:32	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:02.034    23:56:32	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:02.034   23:56:32	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:23:02.034    "name": "raid_bdev1",
00:23:02.034    "uuid": "8144d71e-934c-4cb9-bd8e-af87c290c565",
00:23:02.034    "strip_size_kb": 64,
00:23:02.034    "state": "online",
00:23:02.034    "raid_level": "raid5f",
00:23:02.034    "superblock": false,
00:23:02.034    "num_base_bdevs": 3,
00:23:02.034    "num_base_bdevs_discovered": 3,
00:23:02.034    "num_base_bdevs_operational": 3,
00:23:02.034    "base_bdevs_list": [
00:23:02.034      {
00:23:02.034        "name": "spare",
00:23:02.034        "uuid": "aad59945-0b90-5726-9d82-38b446dbeb86",
00:23:02.034        "is_configured": true,
00:23:02.034        "data_offset": 0,
00:23:02.034        "data_size": 65536
00:23:02.034      },
00:23:02.034      {
00:23:02.034        "name": "BaseBdev2",
00:23:02.034        "uuid": "7114a20a-4815-484f-bcc2-08d081d38c8f",
00:23:02.034        "is_configured": true,
00:23:02.034        "data_offset": 0,
00:23:02.034        "data_size": 65536
00:23:02.034      },
00:23:02.034      {
00:23:02.034        "name": "BaseBdev3",
00:23:02.034        "uuid": "217510dd-f07e-4948-b742-eea658e8338c",
00:23:02.034        "is_configured": true,
00:23:02.034        "data_offset": 0,
00:23:02.034        "data_size": 65536
00:23:02.034      }
00:23:02.034    ]
00:23:02.034  }'
00:23:02.034    23:56:32	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:23:02.292   23:56:32	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:23:02.292    23:56:32	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:23:02.292   23:56:32	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:23:02.292   23:56:32	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:23:02.292   23:56:32	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:02.292   23:56:32	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:02.292   23:56:32	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:02.292   23:56:32	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:02.292   23:56:32	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:02.292   23:56:32	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:02.292   23:56:32	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:02.292   23:56:32	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:02.292   23:56:32	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:02.292    23:56:32	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:02.292    23:56:32	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:02.551   23:56:33	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:02.551    "name": "raid_bdev1",
00:23:02.551    "uuid": "8144d71e-934c-4cb9-bd8e-af87c290c565",
00:23:02.551    "strip_size_kb": 64,
00:23:02.551    "state": "online",
00:23:02.551    "raid_level": "raid5f",
00:23:02.551    "superblock": false,
00:23:02.551    "num_base_bdevs": 3,
00:23:02.551    "num_base_bdevs_discovered": 3,
00:23:02.551    "num_base_bdevs_operational": 3,
00:23:02.551    "base_bdevs_list": [
00:23:02.551      {
00:23:02.551        "name": "spare",
00:23:02.551        "uuid": "aad59945-0b90-5726-9d82-38b446dbeb86",
00:23:02.551        "is_configured": true,
00:23:02.551        "data_offset": 0,
00:23:02.551        "data_size": 65536
00:23:02.551      },
00:23:02.552      {
00:23:02.552        "name": "BaseBdev2",
00:23:02.552        "uuid": "7114a20a-4815-484f-bcc2-08d081d38c8f",
00:23:02.552        "is_configured": true,
00:23:02.552        "data_offset": 0,
00:23:02.552        "data_size": 65536
00:23:02.552      },
00:23:02.552      {
00:23:02.552        "name": "BaseBdev3",
00:23:02.552        "uuid": "217510dd-f07e-4948-b742-eea658e8338c",
00:23:02.552        "is_configured": true,
00:23:02.552        "data_offset": 0,
00:23:02.552        "data_size": 65536
00:23:02.552      }
00:23:02.552    ]
00:23:02.552  }'
00:23:02.552   23:56:33	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:02.552   23:56:33	-- common/autotest_common.sh@10 -- # set +x
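[annotation] The assertions of verify_raid_bdev_state run behind xtrace_disable (@129), so they never appear in this log. Given the locals set at @117-125 and the fields in the dump above, a hypothetical equivalent would be the following (field names come straight from the JSON in this log; the shape of the checks is my guess, and $rpc_py is the variable from the sketch earlier):

    info=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$info") == online ]]
    [[ $(jq -r '.raid_level' <<< "$info") == raid5f ]]
    [[ $(jq -r '.strip_size_kb' <<< "$info") == 64 ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 3 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 3 ]]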
00:23:03.119   23:56:33	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:23:03.378  [2024-12-13 23:56:33.930728] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:03.378  [2024-12-13 23:56:33.930883] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:03.378  [2024-12-13 23:56:33.931064] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:03.378  [2024-12-13 23:56:33.931268] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:03.378  [2024-12-13 23:56:33.931425] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline
00:23:03.378    23:56:33	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:03.378    23:56:33	-- bdev/bdev_raid.sh@671 -- # jq length
00:23:03.637   23:56:34	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:23:03.637   23:56:34	-- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:23:03.637   23:56:34	-- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:23:03.637   23:56:34	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:23:03.637   23:56:34	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:23:03.637   23:56:34	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:23:03.637   23:56:34	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:23:03.637   23:56:34	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:23:03.637   23:56:34	-- bdev/nbd_common.sh@12 -- # local i
00:23:03.637   23:56:34	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:23:03.637   23:56:34	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:23:03.637   23:56:34	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:23:03.895  /dev/nbd0
00:23:03.895    23:56:34	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:23:03.895   23:56:34	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:23:03.895   23:56:34	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:23:03.895   23:56:34	-- common/autotest_common.sh@867 -- # local i
00:23:03.895   23:56:34	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:23:03.895   23:56:34	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:23:03.895   23:56:34	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:23:03.895   23:56:34	-- common/autotest_common.sh@871 -- # break
00:23:03.895   23:56:34	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:23:03.895   23:56:34	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:23:03.895   23:56:34	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:23:03.895  1+0 records in
00:23:03.895  1+0 records out
00:23:03.895  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585893 s, 7.0 MB/s
00:23:03.895    23:56:34	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:23:03.895   23:56:34	-- common/autotest_common.sh@884 -- # size=4096
00:23:03.895   23:56:34	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:23:03.895   23:56:34	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:23:03.895   23:56:34	-- common/autotest_common.sh@887 -- # return 0
00:23:03.895   23:56:34	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:23:03.895   23:56:34	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:23:03.895   23:56:34	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:23:04.155  /dev/nbd1
00:23:04.155    23:56:34	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:23:04.155   23:56:34	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:23:04.155   23:56:34	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:23:04.155   23:56:34	-- common/autotest_common.sh@867 -- # local i
00:23:04.155   23:56:34	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:23:04.155   23:56:34	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:23:04.155   23:56:34	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:23:04.155   23:56:34	-- common/autotest_common.sh@871 -- # break
00:23:04.155   23:56:34	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:23:04.155   23:56:34	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:23:04.155   23:56:34	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:23:04.155  1+0 records in
00:23:04.155  1+0 records out
00:23:04.155  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551395 s, 7.4 MB/s
00:23:04.155    23:56:34	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:23:04.155   23:56:34	-- common/autotest_common.sh@884 -- # size=4096
00:23:04.155   23:56:34	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:23:04.155   23:56:34	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:23:04.155   23:56:34	-- common/autotest_common.sh@887 -- # return 0
00:23:04.155   23:56:34	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:23:04.155   23:56:34	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:23:04.155   23:56:34	-- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
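[annotation] Both NBD exports go through the readiness probe traced at autotest_common.sh@866-887 before the byte-compare. Reassembled from the xtrace as a sketch (the sleep between retries and the /tmp scratch path are my assumptions; these runs succeed on the first pass, so no back-off is visible in the log):

    waitfornbd() {                        # sketch of autotest_common.sh@866-887
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do   # wait for the kernel to publish the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                     # assumed back-off, not visible in the trace
        done
        for ((i = 1; i <= 20; i++)); do   # prove the device actually serves O_DIRECT reads
            if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
               [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]; then
                rm -f /tmp/nbdtest
                return 0
            fi
        done
        return 1
    }

cmp -i 0 then compares the two exports from offset 0; with superblock=false the data_offset of every base is 0, so a correct rebuild leaves the original BaseBdev1 (/dev/nbd0) and the reconstructed spare (/dev/nbd1) byte-identical, and cmp stays silent as it does above.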
00:23:04.414   23:56:34	-- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:23:04.414   23:56:34	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:23:04.414   23:56:34	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:23:04.414   23:56:34	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:23:04.414   23:56:34	-- bdev/nbd_common.sh@51 -- # local i
00:23:04.414   23:56:34	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:23:04.414   23:56:34	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:23:04.672    23:56:35	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:23:04.672   23:56:35	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:23:04.672   23:56:35	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:23:04.672   23:56:35	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:23:04.672   23:56:35	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:23:04.672   23:56:35	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:23:04.672   23:56:35	-- bdev/nbd_common.sh@41 -- # break
00:23:04.672   23:56:35	-- bdev/nbd_common.sh@45 -- # return 0
00:23:04.672   23:56:35	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:23:04.672   23:56:35	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:23:04.930    23:56:35	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:23:04.930   23:56:35	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:23:04.930   23:56:35	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:23:04.930   23:56:35	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:23:04.930   23:56:35	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:23:04.930   23:56:35	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:23:04.930   23:56:35	-- bdev/nbd_common.sh@41 -- # break
00:23:04.930   23:56:35	-- bdev/nbd_common.sh@45 -- # return 0
00:23:04.930   23:56:35	-- bdev/bdev_raid.sh@692 -- # '[' false = true ']'
00:23:04.930   23:56:35	-- bdev/bdev_raid.sh@709 -- # killprocess 128140
00:23:04.930   23:56:35	-- common/autotest_common.sh@936 -- # '[' -z 128140 ']'
00:23:04.930   23:56:35	-- common/autotest_common.sh@940 -- # kill -0 128140
00:23:04.930    23:56:35	-- common/autotest_common.sh@941 -- # uname
00:23:04.930   23:56:35	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:04.930    23:56:35	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128140
00:23:04.930   23:56:35	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:04.930   23:56:35	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:04.930   23:56:35	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 128140'
00:23:04.930  killing process with pid 128140
00:23:04.930   23:56:35	-- common/autotest_common.sh@955 -- # kill 128140
00:23:04.930  Received shutdown signal, test time was about 60.000000 seconds
00:23:04.930                                                                                                  Latency(us)
[2024-12-13T23:56:35.662Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-13T23:56:35.662Z]  ===================================================================================================================
[2024-12-13T23:56:35.662Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:23:04.930   23:56:35	-- common/autotest_common.sh@960 -- # wait 128140
00:23:04.930  [2024-12-13 23:56:35.524005] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:23:05.189  [2024-12-13 23:56:35.781685] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:23:06.126   23:56:36	-- bdev/bdev_raid.sh@711 -- # return 0
00:23:06.126  
00:23:06.126  real	0m20.398s
00:23:06.126  user	0m30.521s
00:23:06.126  sys	0m2.461s
00:23:06.126   23:56:36	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:23:06.126   23:56:36	-- common/autotest_common.sh@10 -- # set +x
00:23:06.126  ************************************
00:23:06.126  END TEST raid5f_rebuild_test
00:23:06.126  ************************************
00:23:06.126   23:56:36	-- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false
00:23:06.126   23:56:36	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:23:06.126   23:56:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:23:06.126   23:56:36	-- common/autotest_common.sh@10 -- # set +x
00:23:06.126  ************************************
00:23:06.126  START TEST raid5f_rebuild_test_sb
00:23:06.126  ************************************
00:23:06.126   23:56:36	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 true false
00:23:06.126   23:56:36	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f
00:23:06.126   23:56:36	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@519 -- # local superblock=true
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@520 -- # local background_io=false
00:23:06.127    23:56:36	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:23:06.127    23:56:36	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:23:06.127    23:56:36	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:23:06.127    23:56:36	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:23:06.127    23:56:36	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:23:06.127    23:56:36	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:23:06.127    23:56:36	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:23:06.127    23:56:36	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:23:06.127    23:56:36	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:23:06.127    23:56:36	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:23:06.127    23:56:36	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']'
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@529 -- # '[' false = true ']'
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@533 -- # strip_size=64
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64'
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@539 -- # '[' true = true ']'
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@540 -- # create_arg+=' -s'
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@544 -- # raid_pid=128686
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@545 -- # waitforlisten 128686 /var/tmp/spdk-raid.sock
00:23:06.127   23:56:36	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:23:06.127   23:56:36	-- common/autotest_common.sh@829 -- # '[' -z 128686 ']'
00:23:06.127   23:56:36	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:23:06.127   23:56:36	-- common/autotest_common.sh@834 -- # local max_retries=100
00:23:06.127  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:23:06.127   23:56:36	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:23:06.127   23:56:36	-- common/autotest_common.sh@838 -- # xtrace_disable
00:23:06.127   23:56:36	-- common/autotest_common.sh@10 -- # set +x
00:23:06.127  [2024-12-13 23:56:36.841210] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:06.127  [2024-12-13 23:56:36.841685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128686 ]
00:23:06.127  I/O size of 3145728 is greater than zero copy threshold (65536).
00:23:06.127  Zero copy mechanism will not be used.
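[annotation] For reading the bdevperf invocation at @543 above — the annotations are my reading of the standard SPDK bdevperf flags, not taken from this log, and -z/-U are left unannotated:

    -r /var/tmp/spdk-raid.sock   # RPC socket that every rpc.py call in this log targets
    -T raid_bdev1                # exercise only the raid bdev under test
    -t 60 -w randrw -M 50        # 60 s of 50/50 random read/write
    -o 3M -q 2                   # 3 MiB I/Os at queue depth 2; 3145728 > 65536 is
                                 #   what triggers the zero-copy notice above
    -L bdev_raid                 # debug logging: the source of the *DEBUG* lines here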
00:23:06.386  [2024-12-13 23:56:37.009410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:06.645  [2024-12-13 23:56:37.178040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:06.645  [2024-12-13 23:56:37.346485] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:23:07.213   23:56:37	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:07.213   23:56:37	-- common/autotest_common.sh@862 -- # return 0
00:23:07.213   23:56:37	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:23:07.213   23:56:37	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:23:07.213   23:56:37	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:23:07.472  BaseBdev1_malloc
00:23:07.472   23:56:37	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:23:07.472  [2024-12-13 23:56:38.153489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:23:07.472  [2024-12-13 23:56:38.153818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:07.472  [2024-12-13 23:56:38.153890] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:23:07.472  [2024-12-13 23:56:38.154162] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:07.472  [2024-12-13 23:56:38.156425] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:07.472  [2024-12-13 23:56:38.156590] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:23:07.472  BaseBdev1
00:23:07.472   23:56:38	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:23:07.472   23:56:38	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:23:07.472   23:56:38	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:23:07.731  BaseBdev2_malloc
00:23:07.990   23:56:38	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:23:07.990  [2024-12-13 23:56:38.690752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:23:07.990  [2024-12-13 23:56:38.690993] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:07.990  [2024-12-13 23:56:38.691141] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:23:07.990  [2024-12-13 23:56:38.691290] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:07.990  [2024-12-13 23:56:38.693538] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:07.990  [2024-12-13 23:56:38.693764] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:23:07.990  BaseBdev2
00:23:07.990   23:56:38	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:23:07.990   23:56:38	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:23:07.990   23:56:38	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:23:08.249  BaseBdev3_malloc
00:23:08.249   23:56:38	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:23:08.507  [2024-12-13 23:56:39.160175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:23:08.507  [2024-12-13 23:56:39.160404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:08.507  [2024-12-13 23:56:39.160483] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:23:08.507  [2024-12-13 23:56:39.160746] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:08.507  [2024-12-13 23:56:39.162956] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:08.507  [2024-12-13 23:56:39.163135] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:23:08.507  BaseBdev3
00:23:08.507   23:56:39	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:23:08.766  spare_malloc
00:23:08.766   23:56:39	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:23:09.025  spare_delay
00:23:09.025   23:56:39	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:23:09.284  [2024-12-13 23:56:39.789467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:23:09.284  [2024-12-13 23:56:39.789716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:09.284  [2024-12-13 23:56:39.789865] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:23:09.284  [2024-12-13 23:56:39.790038] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:09.284  [2024-12-13 23:56:39.792276] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:09.284  [2024-12-13 23:56:39.792440] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:23:09.284  spare
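[annotation] The spare is deliberately built as malloc -> delay -> passthru. My reading of the bdev_delay_create arguments at @559 (per the RPC's parameter order: average/p99 read latency, then average/p99 write latency, in microseconds; treat this as an assumption, not log fact):

    -r 0 -t 0            # reads pass through undelayed
    -w 100000 -n 100000  # every write to the spare takes ~100 ms, which slows the
                         #   rebuild enough for the polling loop to catch
                         #   intermediate progress snapshots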
00:23:09.284   23:56:39	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1
00:23:09.284  [2024-12-13 23:56:40.017665] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:09.542  [2024-12-13 23:56:40.019981] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:23:09.542  [2024-12-13 23:56:40.020220] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:23:09.542  [2024-12-13 23:56:40.020516] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980
00:23:09.542  [2024-12-13 23:56:40.020567] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:23:09.542  [2024-12-13 23:56:40.020797] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0
00:23:09.542  [2024-12-13 23:56:40.025252] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980
00:23:09.542  [2024-12-13 23:56:40.025381] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980
00:23:09.542  [2024-12-13 23:56:40.025644] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:09.542   23:56:40	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:23:09.542   23:56:40	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:09.542   23:56:40	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:09.542   23:56:40	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:09.542   23:56:40	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:09.542   23:56:40	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:09.542   23:56:40	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:09.543   23:56:40	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:09.543   23:56:40	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:09.543   23:56:40	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:09.543    23:56:40	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:09.543    23:56:40	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:09.543   23:56:40	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:09.543    "name": "raid_bdev1",
00:23:09.543    "uuid": "73785e7f-cb04-4f82-b777-2ccfdff57c7a",
00:23:09.543    "strip_size_kb": 64,
00:23:09.543    "state": "online",
00:23:09.543    "raid_level": "raid5f",
00:23:09.543    "superblock": true,
00:23:09.543    "num_base_bdevs": 3,
00:23:09.543    "num_base_bdevs_discovered": 3,
00:23:09.543    "num_base_bdevs_operational": 3,
00:23:09.543    "base_bdevs_list": [
00:23:09.543      {
00:23:09.543        "name": "BaseBdev1",
00:23:09.543        "uuid": "a0066759-5218-5361-92c1-321b357e0651",
00:23:09.543        "is_configured": true,
00:23:09.543        "data_offset": 2048,
00:23:09.543        "data_size": 63488
00:23:09.543      },
00:23:09.543      {
00:23:09.543        "name": "BaseBdev2",
00:23:09.543        "uuid": "1096f5c3-cdfe-5767-adf9-abcd65a32a59",
00:23:09.543        "is_configured": true,
00:23:09.543        "data_offset": 2048,
00:23:09.543        "data_size": 63488
00:23:09.543      },
00:23:09.543      {
00:23:09.543        "name": "BaseBdev3",
00:23:09.543        "uuid": "76f2e77f-86d9-576a-af97-82a7b20a78ff",
00:23:09.543        "is_configured": true,
00:23:09.543        "data_offset": 2048,
00:23:09.543        "data_size": 63488
00:23:09.543      }
00:23:09.543    ]
00:23:09.543  }'
00:23:09.543   23:56:40	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:09.543   23:56:40	-- common/autotest_common.sh@10 -- # set +x
00:23:10.109    23:56:40	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:23:10.109    23:56:40	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:23:10.367  [2024-12-13 23:56:41.010631] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:10.367   23:56:41	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976
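[annotation] The 126976 picked up here checks out against the creation parameters earlier (my arithmetic): bdev_malloc_create 32 512 gives 32 MiB of 512-byte blocks per base, -s reserves a 2048-block superblock region (the data_offset in every dump of this test), and raid5f over 3 bases keeps 2 data blocks per stripe:

    echo $(( 32 * 1024 * 1024 / 512 ))   # 65536 blocks per malloc base
    echo $(( 65536 - 2048 ))             # 63488 data blocks, the data_size above
    echo $(( (3 - 1) * 63488 ))          # 126976 == raid_bdev_size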
00:23:10.367    23:56:41	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:10.367    23:56:41	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:23:10.626   23:56:41	-- bdev/bdev_raid.sh@570 -- # data_offset=2048
00:23:10.626   23:56:41	-- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:23:10.626   23:56:41	-- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:23:10.626   23:56:41	-- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:23:10.626   23:56:41	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:23:10.626   23:56:41	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:23:10.626   23:56:41	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:23:10.626   23:56:41	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:23:10.626   23:56:41	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:23:10.626   23:56:41	-- bdev/nbd_common.sh@12 -- # local i
00:23:10.626   23:56:41	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:23:10.626   23:56:41	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:23:10.626   23:56:41	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:23:10.885  [2024-12-13 23:56:41.526820] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:23:10.885  /dev/nbd0
00:23:10.885    23:56:41	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:23:10.885   23:56:41	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:23:10.885   23:56:41	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:23:10.885   23:56:41	-- common/autotest_common.sh@867 -- # local i
00:23:10.885   23:56:41	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:23:10.885   23:56:41	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:23:10.885   23:56:41	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:23:10.885   23:56:41	-- common/autotest_common.sh@871 -- # break
00:23:10.885   23:56:41	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:23:10.885   23:56:41	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:23:10.885   23:56:41	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:23:10.885  1+0 records in
00:23:10.885  1+0 records out
00:23:10.885  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261408 s, 15.7 MB/s
00:23:10.885    23:56:41	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:23:10.885   23:56:41	-- common/autotest_common.sh@884 -- # size=4096
00:23:10.885   23:56:41	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:23:10.885   23:56:41	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:23:10.885   23:56:41	-- common/autotest_common.sh@887 -- # return 0
00:23:10.885   23:56:41	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:23:10.885   23:56:41	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:23:10.885   23:56:41	-- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']'
00:23:10.885   23:56:41	-- bdev/bdev_raid.sh@581 -- # write_unit_size=256
00:23:10.885   23:56:41	-- bdev/bdev_raid.sh@582 -- # echo 128
00:23:10.885   23:56:41	-- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct
00:23:11.452  496+0 records in
00:23:11.452  496+0 records out
00:23:11.452  65011712 bytes (65 MB, 62 MiB) copied, 0.406153 s, 160 MB/s
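[annotation] The fill parameters line up exactly with full stripes (my arithmetic; write_unit_size=256 and the echoed 128 come from @581-582 above, and my reading is that 128 is blocks per strip):

    echo $(( 64 * 1024 / 512 ))     # 128 blocks per 64 KiB strip
    echo $(( 2 * 128 * 512 ))       # 131072 B write unit (2 data strips) == the dd bs
    echo $(( 496 * 131072 ))        # 65011712 B == 126976 blocks: exactly one full
                                    #   pass over the array before any base is removed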
00:23:11.452   23:56:41	-- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:23:11.452   23:56:41	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:23:11.452   23:56:41	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:23:11.452   23:56:41	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:23:11.452   23:56:41	-- bdev/nbd_common.sh@51 -- # local i
00:23:11.452   23:56:41	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:23:11.452   23:56:41	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:23:11.711    23:56:42	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:23:11.711   23:56:42	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:23:11.711   23:56:42	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:23:11.711   23:56:42	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:23:11.711   23:56:42	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:23:11.711   23:56:42	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:23:11.711   23:56:42	-- bdev/nbd_common.sh@41 -- # break
00:23:11.711   23:56:42	-- bdev/nbd_common.sh@45 -- # return 0
00:23:11.711   23:56:42	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:23:11.711  [2024-12-13 23:56:42.255902] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:11.711  [2024-12-13 23:56:42.409435] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:23:11.711   23:56:42	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:23:11.711   23:56:42	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:11.711   23:56:42	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:11.711   23:56:42	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:11.711   23:56:42	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:11.711   23:56:42	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:23:11.711   23:56:42	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:11.711   23:56:42	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:11.711   23:56:42	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:11.711   23:56:42	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:11.711    23:56:42	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:11.711    23:56:42	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:11.970   23:56:42	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:11.970    "name": "raid_bdev1",
00:23:11.970    "uuid": "73785e7f-cb04-4f82-b777-2ccfdff57c7a",
00:23:11.970    "strip_size_kb": 64,
00:23:11.970    "state": "online",
00:23:11.970    "raid_level": "raid5f",
00:23:11.970    "superblock": true,
00:23:11.970    "num_base_bdevs": 3,
00:23:11.970    "num_base_bdevs_discovered": 2,
00:23:11.970    "num_base_bdevs_operational": 2,
00:23:11.970    "base_bdevs_list": [
00:23:11.970      {
00:23:11.970        "name": null,
00:23:11.970        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:11.970        "is_configured": false,
00:23:11.970        "data_offset": 2048,
00:23:11.970        "data_size": 63488
00:23:11.970      },
00:23:11.970      {
00:23:11.970        "name": "BaseBdev2",
00:23:11.970        "uuid": "1096f5c3-cdfe-5767-adf9-abcd65a32a59",
00:23:11.970        "is_configured": true,
00:23:11.970        "data_offset": 2048,
00:23:11.970        "data_size": 63488
00:23:11.970      },
00:23:11.970      {
00:23:11.970        "name": "BaseBdev3",
00:23:11.970        "uuid": "76f2e77f-86d9-576a-af97-82a7b20a78ff",
00:23:11.970        "is_configured": true,
00:23:11.970        "data_offset": 2048,
00:23:11.970        "data_size": 63488
00:23:11.970      }
00:23:11.970    ]
00:23:11.970  }'
00:23:11.970   23:56:42	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:11.970   23:56:42	-- common/autotest_common.sh@10 -- # set +x
00:23:12.540   23:56:43	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:23:12.799  [2024-12-13 23:56:43.381670] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:23:12.799  [2024-12-13 23:56:43.381714] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:23:12.799  [2024-12-13 23:56:43.392745] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028b70
00:23:12.799  [2024-12-13 23:56:43.398422] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:23:12.799   23:56:43	-- bdev/bdev_raid.sh@598 -- # sleep 1
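[annotation] The whole degrade-and-rebuild cycle is driven over the same RPC socket; the sequence below is reconstructed from the @591 and @597 calls traced above:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_remove_base_bdev BaseBdev1       # @591: degrade, 3 -> 2 operational
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare   # @597: attach spare; the
                                                    #   "Started rebuild" notice follows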
00:23:13.735   23:56:44	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:13.735   23:56:44	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:23:13.735   23:56:44	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:23:13.735   23:56:44	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:23:13.735   23:56:44	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:23:13.735    23:56:44	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:13.735    23:56:44	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:13.993   23:56:44	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:23:13.993    "name": "raid_bdev1",
00:23:13.993    "uuid": "73785e7f-cb04-4f82-b777-2ccfdff57c7a",
00:23:13.993    "strip_size_kb": 64,
00:23:13.993    "state": "online",
00:23:13.993    "raid_level": "raid5f",
00:23:13.993    "superblock": true,
00:23:13.993    "num_base_bdevs": 3,
00:23:13.993    "num_base_bdevs_discovered": 3,
00:23:13.993    "num_base_bdevs_operational": 3,
00:23:13.993    "process": {
00:23:13.993      "type": "rebuild",
00:23:13.993      "target": "spare",
00:23:13.993      "progress": {
00:23:13.993        "blocks": 24576,
00:23:13.993        "percent": 19
00:23:13.993      }
00:23:13.993    },
00:23:13.993    "base_bdevs_list": [
00:23:13.993      {
00:23:13.993        "name": "spare",
00:23:13.993        "uuid": "3b6c239c-bd3f-5168-8688-52a291ea349c",
00:23:13.993        "is_configured": true,
00:23:13.993        "data_offset": 2048,
00:23:13.993        "data_size": 63488
00:23:13.993      },
00:23:13.993      {
00:23:13.993        "name": "BaseBdev2",
00:23:13.993        "uuid": "1096f5c3-cdfe-5767-adf9-abcd65a32a59",
00:23:13.993        "is_configured": true,
00:23:13.993        "data_offset": 2048,
00:23:13.993        "data_size": 63488
00:23:13.993      },
00:23:13.993      {
00:23:13.993        "name": "BaseBdev3",
00:23:13.993        "uuid": "76f2e77f-86d9-576a-af97-82a7b20a78ff",
00:23:13.993        "is_configured": true,
00:23:13.993        "data_offset": 2048,
00:23:13.993        "data_size": 63488
00:23:13.993      }
00:23:13.993    ]
00:23:13.993  }'
00:23:13.993    23:56:44	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:23:13.993   23:56:44	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:23:13.993    23:56:44	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:23:14.252   23:56:44	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:23:14.252   23:56:44	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:23:14.252  [2024-12-13 23:56:44.979540] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:23:14.511  [2024-12-13 23:56:45.011471] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:23:14.511  [2024-12-13 23:56:45.011583] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:14.511   23:56:45	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:23:14.511   23:56:45	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:14.511   23:56:45	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:14.511   23:56:45	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:14.511   23:56:45	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:14.511   23:56:45	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:23:14.511   23:56:45	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:14.511   23:56:45	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:14.511   23:56:45	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:14.511   23:56:45	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:14.511    23:56:45	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:14.511    23:56:45	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:14.769   23:56:45	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:14.769    "name": "raid_bdev1",
00:23:14.769    "uuid": "73785e7f-cb04-4f82-b777-2ccfdff57c7a",
00:23:14.769    "strip_size_kb": 64,
00:23:14.769    "state": "online",
00:23:14.769    "raid_level": "raid5f",
00:23:14.769    "superblock": true,
00:23:14.769    "num_base_bdevs": 3,
00:23:14.769    "num_base_bdevs_discovered": 2,
00:23:14.769    "num_base_bdevs_operational": 2,
00:23:14.769    "base_bdevs_list": [
00:23:14.769      {
00:23:14.769        "name": null,
00:23:14.769        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:14.769        "is_configured": false,
00:23:14.769        "data_offset": 2048,
00:23:14.769        "data_size": 63488
00:23:14.769      },
00:23:14.769      {
00:23:14.769        "name": "BaseBdev2",
00:23:14.770        "uuid": "1096f5c3-cdfe-5767-adf9-abcd65a32a59",
00:23:14.770        "is_configured": true,
00:23:14.770        "data_offset": 2048,
00:23:14.770        "data_size": 63488
00:23:14.770      },
00:23:14.770      {
00:23:14.770        "name": "BaseBdev3",
00:23:14.770        "uuid": "76f2e77f-86d9-576a-af97-82a7b20a78ff",
00:23:14.770        "is_configured": true,
00:23:14.770        "data_offset": 2048,
00:23:14.770        "data_size": 63488
00:23:14.770      }
00:23:14.770    ]
00:23:14.770  }'
00:23:14.770   23:56:45	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:14.770   23:56:45	-- common/autotest_common.sh@10 -- # set +x
00:23:15.337   23:56:45	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:23:15.337   23:56:45	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:23:15.337   23:56:45	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:23:15.337   23:56:45	-- bdev/bdev_raid.sh@185 -- # local target=none
00:23:15.337   23:56:45	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:23:15.337    23:56:45	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:15.337    23:56:45	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:15.596   23:56:46	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:23:15.596    "name": "raid_bdev1",
00:23:15.596    "uuid": "73785e7f-cb04-4f82-b777-2ccfdff57c7a",
00:23:15.596    "strip_size_kb": 64,
00:23:15.596    "state": "online",
00:23:15.596    "raid_level": "raid5f",
00:23:15.596    "superblock": true,
00:23:15.596    "num_base_bdevs": 3,
00:23:15.596    "num_base_bdevs_discovered": 2,
00:23:15.596    "num_base_bdevs_operational": 2,
00:23:15.596    "base_bdevs_list": [
00:23:15.596      {
00:23:15.596        "name": null,
00:23:15.596        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:15.596        "is_configured": false,
00:23:15.596        "data_offset": 2048,
00:23:15.596        "data_size": 63488
00:23:15.596      },
00:23:15.596      {
00:23:15.596        "name": "BaseBdev2",
00:23:15.596        "uuid": "1096f5c3-cdfe-5767-adf9-abcd65a32a59",
00:23:15.596        "is_configured": true,
00:23:15.596        "data_offset": 2048,
00:23:15.596        "data_size": 63488
00:23:15.596      },
00:23:15.596      {
00:23:15.596        "name": "BaseBdev3",
00:23:15.596        "uuid": "76f2e77f-86d9-576a-af97-82a7b20a78ff",
00:23:15.596        "is_configured": true,
00:23:15.596        "data_offset": 2048,
00:23:15.596        "data_size": 63488
00:23:15.596      }
00:23:15.596    ]
00:23:15.596  }'
00:23:15.596    23:56:46	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:23:15.596   23:56:46	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:23:15.596    23:56:46	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:23:15.596   23:56:46	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
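The two process checks at @190/@191 lean on jq's alternative operator: '.process.type // "none"' yields the fallback string when the .process object is absent, so the same expression covers both an idle array and one mid-rebuild. For example:

    # idle array: no .process key at all
    jq -r '.process.type // "none"'   <<<'{"name":"raid_bdev1"}'            # -> none
    # array with a running process
    jq -r '.process.target // "none"' <<<'{"process":{"target":"spare"}}'   # -> spare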
00:23:15.596   23:56:46	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:23:15.854  [2024-12-13 23:56:46.427582] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:23:15.854  [2024-12-13 23:56:46.427632] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:23:15.854  [2024-12-13 23:56:46.438480] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028d10
00:23:15.854  [2024-12-13 23:56:46.444051] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:23:15.854   23:56:46	-- bdev/bdev_raid.sh@614 -- # sleep 1
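bdev_raid_add_base_bdev re-attaches a device into the degraded slot and, as the NOTICE above shows, immediately starts a rebuild thread toward it; the one-second sleep only gives that thread a head start before the first progress poll. Stripped to its essentials (socket and path from this log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
    sleep 1   # let the rebuild process register before the first poll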
00:23:16.791   23:56:47	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:16.791   23:56:47	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:23:16.791   23:56:47	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:23:16.791   23:56:47	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:23:16.791   23:56:47	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:23:16.791    23:56:47	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:16.791    23:56:47	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:17.050   23:56:47	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:23:17.050    "name": "raid_bdev1",
00:23:17.050    "uuid": "73785e7f-cb04-4f82-b777-2ccfdff57c7a",
00:23:17.050    "strip_size_kb": 64,
00:23:17.050    "state": "online",
00:23:17.050    "raid_level": "raid5f",
00:23:17.050    "superblock": true,
00:23:17.050    "num_base_bdevs": 3,
00:23:17.050    "num_base_bdevs_discovered": 3,
00:23:17.050    "num_base_bdevs_operational": 3,
00:23:17.050    "process": {
00:23:17.050      "type": "rebuild",
00:23:17.050      "target": "spare",
00:23:17.050      "progress": {
00:23:17.050        "blocks": 24576,
00:23:17.050        "percent": 19
00:23:17.050      }
00:23:17.050    },
00:23:17.050    "base_bdevs_list": [
00:23:17.050      {
00:23:17.050        "name": "spare",
00:23:17.050        "uuid": "3b6c239c-bd3f-5168-8688-52a291ea349c",
00:23:17.050        "is_configured": true,
00:23:17.050        "data_offset": 2048,
00:23:17.050        "data_size": 63488
00:23:17.050      },
00:23:17.050      {
00:23:17.050        "name": "BaseBdev2",
00:23:17.050        "uuid": "1096f5c3-cdfe-5767-adf9-abcd65a32a59",
00:23:17.050        "is_configured": true,
00:23:17.050        "data_offset": 2048,
00:23:17.050        "data_size": 63488
00:23:17.050      },
00:23:17.050      {
00:23:17.050        "name": "BaseBdev3",
00:23:17.050        "uuid": "76f2e77f-86d9-576a-af97-82a7b20a78ff",
00:23:17.050        "is_configured": true,
00:23:17.050        "data_offset": 2048,
00:23:17.050        "data_size": 63488
00:23:17.050      }
00:23:17.050    ]
00:23:17.050  }'
00:23:17.050    23:56:47	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:23:17.050   23:56:47	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:23:17.050    23:56:47	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:23:17.308   23:56:47	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:23:17.308   23:56:47	-- bdev/bdev_raid.sh@617 -- # '[' true = true ']'
00:23:17.308   23:56:47	-- bdev/bdev_raid.sh@617 -- # '[' = false ']'
00:23:17.308  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected
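The "unary operator expected" message is a shell bug in the test script rather than a product failure: an empty, unquoted variable on line 617 expands to nothing, leaving '[' = false ']' behind. The broken test merely returns non-zero, so the run continues at @642. Two defensive rewrites (the actual variable name is not visible in this log, so $flag is a stand-in):

    [ "${flag:-}" = false ]   # quoting keeps the left operand even when empty
    [[ $flag == false ]]      # [[ ]] does not word-split, so no quoting needed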
00:23:17.308   23:56:47	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3
00:23:17.308   23:56:47	-- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']'
00:23:17.308   23:56:47	-- bdev/bdev_raid.sh@657 -- # local timeout=618
00:23:17.308   23:56:47	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:23:17.308   23:56:47	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:17.308   23:56:47	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:23:17.308   23:56:47	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:23:17.308   23:56:47	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:23:17.308   23:56:47	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:23:17.308    23:56:47	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:17.308    23:56:47	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:17.308   23:56:47	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:23:17.308    "name": "raid_bdev1",
00:23:17.308    "uuid": "73785e7f-cb04-4f82-b777-2ccfdff57c7a",
00:23:17.308    "strip_size_kb": 64,
00:23:17.308    "state": "online",
00:23:17.308    "raid_level": "raid5f",
00:23:17.308    "superblock": true,
00:23:17.308    "num_base_bdevs": 3,
00:23:17.308    "num_base_bdevs_discovered": 3,
00:23:17.308    "num_base_bdevs_operational": 3,
00:23:17.308    "process": {
00:23:17.308      "type": "rebuild",
00:23:17.308      "target": "spare",
00:23:17.308      "progress": {
00:23:17.308        "blocks": 30720,
00:23:17.308        "percent": 24
00:23:17.308      }
00:23:17.308    },
00:23:17.308    "base_bdevs_list": [
00:23:17.308      {
00:23:17.308        "name": "spare",
00:23:17.308        "uuid": "3b6c239c-bd3f-5168-8688-52a291ea349c",
00:23:17.309        "is_configured": true,
00:23:17.309        "data_offset": 2048,
00:23:17.309        "data_size": 63488
00:23:17.309      },
00:23:17.309      {
00:23:17.309        "name": "BaseBdev2",
00:23:17.309        "uuid": "1096f5c3-cdfe-5767-adf9-abcd65a32a59",
00:23:17.309        "is_configured": true,
00:23:17.309        "data_offset": 2048,
00:23:17.309        "data_size": 63488
00:23:17.309      },
00:23:17.309      {
00:23:17.309        "name": "BaseBdev3",
00:23:17.309        "uuid": "76f2e77f-86d9-576a-af97-82a7b20a78ff",
00:23:17.309        "is_configured": true,
00:23:17.309        "data_offset": 2048,
00:23:17.309        "data_size": 63488
00:23:17.309      }
00:23:17.309    ]
00:23:17.309  }'
00:23:17.309    23:56:47	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:23:17.567   23:56:48	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:23:17.567    23:56:48	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:23:17.567   23:56:48	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:23:17.567   23:56:48	-- bdev/bdev_raid.sh@662 -- # sleep 1
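The repeated verify/sleep pairs that follow are one SECONDS-based timeout loop (@657-@662); the log's 'local timeout=618' is SECONDS plus a 600 s budget. Condensed into a standalone sketch, with the same RPC and jq filter as above and illustrative variable names:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    timeout=$((SECONDS + 600))
    while (( SECONDS < timeout )); do
        ptype=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
                jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [[ $ptype == rebuild ]] || break   # process entry gone -> rebuild done
        sleep 1
    done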
00:23:18.503   23:56:49	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:23:18.503   23:56:49	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:18.503   23:56:49	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:23:18.503   23:56:49	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:23:18.503   23:56:49	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:23:18.503   23:56:49	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:23:18.503    23:56:49	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:18.503    23:56:49	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:18.762   23:56:49	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:23:18.762    "name": "raid_bdev1",
00:23:18.762    "uuid": "73785e7f-cb04-4f82-b777-2ccfdff57c7a",
00:23:18.762    "strip_size_kb": 64,
00:23:18.762    "state": "online",
00:23:18.762    "raid_level": "raid5f",
00:23:18.762    "superblock": true,
00:23:18.762    "num_base_bdevs": 3,
00:23:18.762    "num_base_bdevs_discovered": 3,
00:23:18.762    "num_base_bdevs_operational": 3,
00:23:18.762    "process": {
00:23:18.762      "type": "rebuild",
00:23:18.762      "target": "spare",
00:23:18.762      "progress": {
00:23:18.762        "blocks": 57344,
00:23:18.762        "percent": 45
00:23:18.762      }
00:23:18.762    },
00:23:18.762    "base_bdevs_list": [
00:23:18.762      {
00:23:18.762        "name": "spare",
00:23:18.762        "uuid": "3b6c239c-bd3f-5168-8688-52a291ea349c",
00:23:18.762        "is_configured": true,
00:23:18.762        "data_offset": 2048,
00:23:18.762        "data_size": 63488
00:23:18.762      },
00:23:18.762      {
00:23:18.762        "name": "BaseBdev2",
00:23:18.762        "uuid": "1096f5c3-cdfe-5767-adf9-abcd65a32a59",
00:23:18.762        "is_configured": true,
00:23:18.762        "data_offset": 2048,
00:23:18.762        "data_size": 63488
00:23:18.762      },
00:23:18.762      {
00:23:18.762        "name": "BaseBdev3",
00:23:18.762        "uuid": "76f2e77f-86d9-576a-af97-82a7b20a78ff",
00:23:18.762        "is_configured": true,
00:23:18.762        "data_offset": 2048,
00:23:18.762        "data_size": 63488
00:23:18.762      }
00:23:18.762    ]
00:23:18.762  }'
00:23:18.762    23:56:49	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:23:18.762   23:56:49	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:23:18.762    23:56:49	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:23:18.762   23:56:49	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:23:18.762   23:56:49	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:23:20.138   23:56:50	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:23:20.138   23:56:50	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:20.138   23:56:50	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:23:20.138   23:56:50	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:23:20.138   23:56:50	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:23:20.138   23:56:50	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:23:20.138    23:56:50	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:20.138    23:56:50	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:20.138   23:56:50	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:23:20.138    "name": "raid_bdev1",
00:23:20.138    "uuid": "73785e7f-cb04-4f82-b777-2ccfdff57c7a",
00:23:20.138    "strip_size_kb": 64,
00:23:20.138    "state": "online",
00:23:20.138    "raid_level": "raid5f",
00:23:20.138    "superblock": true,
00:23:20.138    "num_base_bdevs": 3,
00:23:20.138    "num_base_bdevs_discovered": 3,
00:23:20.138    "num_base_bdevs_operational": 3,
00:23:20.138    "process": {
00:23:20.138      "type": "rebuild",
00:23:20.138      "target": "spare",
00:23:20.138      "progress": {
00:23:20.138        "blocks": 83968,
00:23:20.138        "percent": 66
00:23:20.138      }
00:23:20.138    },
00:23:20.138    "base_bdevs_list": [
00:23:20.138      {
00:23:20.138        "name": "spare",
00:23:20.138        "uuid": "3b6c239c-bd3f-5168-8688-52a291ea349c",
00:23:20.138        "is_configured": true,
00:23:20.138        "data_offset": 2048,
00:23:20.138        "data_size": 63488
00:23:20.138      },
00:23:20.138      {
00:23:20.138        "name": "BaseBdev2",
00:23:20.138        "uuid": "1096f5c3-cdfe-5767-adf9-abcd65a32a59",
00:23:20.138        "is_configured": true,
00:23:20.138        "data_offset": 2048,
00:23:20.138        "data_size": 63488
00:23:20.138      },
00:23:20.138      {
00:23:20.138        "name": "BaseBdev3",
00:23:20.138        "uuid": "76f2e77f-86d9-576a-af97-82a7b20a78ff",
00:23:20.138        "is_configured": true,
00:23:20.138        "data_offset": 2048,
00:23:20.138        "data_size": 63488
00:23:20.138      }
00:23:20.138    ]
00:23:20.138  }'
00:23:20.138    23:56:50	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:23:20.138   23:56:50	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:23:20.138    23:56:50	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:23:20.138   23:56:50	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:23:20.138   23:56:50	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:23:21.074   23:56:51	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:23:21.074   23:56:51	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:21.074   23:56:51	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:23:21.074   23:56:51	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:23:21.074   23:56:51	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:23:21.074   23:56:51	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:23:21.074    23:56:51	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:21.074    23:56:51	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:21.332   23:56:51	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:23:21.332    "name": "raid_bdev1",
00:23:21.332    "uuid": "73785e7f-cb04-4f82-b777-2ccfdff57c7a",
00:23:21.332    "strip_size_kb": 64,
00:23:21.332    "state": "online",
00:23:21.332    "raid_level": "raid5f",
00:23:21.332    "superblock": true,
00:23:21.332    "num_base_bdevs": 3,
00:23:21.332    "num_base_bdevs_discovered": 3,
00:23:21.332    "num_base_bdevs_operational": 3,
00:23:21.332    "process": {
00:23:21.332      "type": "rebuild",
00:23:21.332      "target": "spare",
00:23:21.332      "progress": {
00:23:21.332        "blocks": 110592,
00:23:21.332        "percent": 87
00:23:21.332      }
00:23:21.332    },
00:23:21.332    "base_bdevs_list": [
00:23:21.332      {
00:23:21.332        "name": "spare",
00:23:21.332        "uuid": "3b6c239c-bd3f-5168-8688-52a291ea349c",
00:23:21.332        "is_configured": true,
00:23:21.332        "data_offset": 2048,
00:23:21.332        "data_size": 63488
00:23:21.332      },
00:23:21.332      {
00:23:21.332        "name": "BaseBdev2",
00:23:21.332        "uuid": "1096f5c3-cdfe-5767-adf9-abcd65a32a59",
00:23:21.332        "is_configured": true,
00:23:21.332        "data_offset": 2048,
00:23:21.332        "data_size": 63488
00:23:21.332      },
00:23:21.332      {
00:23:21.332        "name": "BaseBdev3",
00:23:21.332        "uuid": "76f2e77f-86d9-576a-af97-82a7b20a78ff",
00:23:21.332        "is_configured": true,
00:23:21.332        "data_offset": 2048,
00:23:21.332        "data_size": 63488
00:23:21.332      }
00:23:21.332    ]
00:23:21.332  }'
00:23:21.332    23:56:51	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:23:21.332   23:56:52	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:23:21.333    23:56:52	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:23:21.591   23:56:52	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:23:21.591   23:56:52	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:23:22.158  [2024-12-13 23:56:52.698325] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:23:22.158  [2024-12-13 23:56:52.698392] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:23:22.158  [2024-12-13 23:56:52.698522] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:22.417   23:56:53	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:23:22.417   23:56:53	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:22.417   23:56:53	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:23:22.417   23:56:53	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:23:22.417   23:56:53	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:23:22.417   23:56:53	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:23:22.417    23:56:53	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:22.417    23:56:53	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:22.676   23:56:53	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:23:22.676    "name": "raid_bdev1",
00:23:22.676    "uuid": "73785e7f-cb04-4f82-b777-2ccfdff57c7a",
00:23:22.676    "strip_size_kb": 64,
00:23:22.676    "state": "online",
00:23:22.676    "raid_level": "raid5f",
00:23:22.676    "superblock": true,
00:23:22.676    "num_base_bdevs": 3,
00:23:22.676    "num_base_bdevs_discovered": 3,
00:23:22.676    "num_base_bdevs_operational": 3,
00:23:22.676    "base_bdevs_list": [
00:23:22.676      {
00:23:22.676        "name": "spare",
00:23:22.676        "uuid": "3b6c239c-bd3f-5168-8688-52a291ea349c",
00:23:22.676        "is_configured": true,
00:23:22.676        "data_offset": 2048,
00:23:22.676        "data_size": 63488
00:23:22.676      },
00:23:22.676      {
00:23:22.676        "name": "BaseBdev2",
00:23:22.676        "uuid": "1096f5c3-cdfe-5767-adf9-abcd65a32a59",
00:23:22.676        "is_configured": true,
00:23:22.676        "data_offset": 2048,
00:23:22.676        "data_size": 63488
00:23:22.676      },
00:23:22.676      {
00:23:22.676        "name": "BaseBdev3",
00:23:22.676        "uuid": "76f2e77f-86d9-576a-af97-82a7b20a78ff",
00:23:22.676        "is_configured": true,
00:23:22.676        "data_offset": 2048,
00:23:22.676        "data_size": 63488
00:23:22.676      }
00:23:22.676    ]
00:23:22.676  }'
00:23:22.676    23:56:53	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:23:22.676   23:56:53	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:23:22.676    23:56:53	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:23:22.935   23:56:53	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:23:22.935   23:56:53	-- bdev/bdev_raid.sh@660 -- # break
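Once raid_bdev_process_finish_done fires (logged a few lines earlier), bdev_raid_get_bdevs stops reporting a .process object; both jq fallbacks then read "none", the [[ none == rebuild ]] test fails, and the loop breaks as shown. One way to confirm from the shell, under the same socket as this log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all |
        jq '.[] | select(.name == "raid_bdev1") | has("process")'   # false once done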
00:23:22.935   23:56:53	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:23:22.935   23:56:53	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:23:22.935   23:56:53	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:23:22.935   23:56:53	-- bdev/bdev_raid.sh@185 -- # local target=none
00:23:22.935   23:56:53	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:23:22.935    23:56:53	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:22.935    23:56:53	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:23.195   23:56:53	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:23:23.196    "name": "raid_bdev1",
00:23:23.196    "uuid": "73785e7f-cb04-4f82-b777-2ccfdff57c7a",
00:23:23.196    "strip_size_kb": 64,
00:23:23.196    "state": "online",
00:23:23.196    "raid_level": "raid5f",
00:23:23.196    "superblock": true,
00:23:23.196    "num_base_bdevs": 3,
00:23:23.196    "num_base_bdevs_discovered": 3,
00:23:23.196    "num_base_bdevs_operational": 3,
00:23:23.196    "base_bdevs_list": [
00:23:23.196      {
00:23:23.196        "name": "spare",
00:23:23.196        "uuid": "3b6c239c-bd3f-5168-8688-52a291ea349c",
00:23:23.196        "is_configured": true,
00:23:23.196        "data_offset": 2048,
00:23:23.196        "data_size": 63488
00:23:23.196      },
00:23:23.196      {
00:23:23.196        "name": "BaseBdev2",
00:23:23.196        "uuid": "1096f5c3-cdfe-5767-adf9-abcd65a32a59",
00:23:23.196        "is_configured": true,
00:23:23.196        "data_offset": 2048,
00:23:23.196        "data_size": 63488
00:23:23.196      },
00:23:23.196      {
00:23:23.196        "name": "BaseBdev3",
00:23:23.196        "uuid": "76f2e77f-86d9-576a-af97-82a7b20a78ff",
00:23:23.196        "is_configured": true,
00:23:23.196        "data_offset": 2048,
00:23:23.196        "data_size": 63488
00:23:23.196      }
00:23:23.196    ]
00:23:23.196  }'
00:23:23.196    23:56:53	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:23:23.196   23:56:53	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:23:23.196    23:56:53	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:23:23.196   23:56:53	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:23:23.196   23:56:53	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:23:23.196   23:56:53	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:23.196   23:56:53	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:23.196   23:56:53	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:23.196   23:56:53	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:23.196   23:56:53	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:23.196   23:56:53	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:23.196   23:56:53	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:23.196   23:56:53	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:23.196   23:56:53	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:23.196    23:56:53	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:23.196    23:56:53	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:23.471   23:56:54	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:23.471    "name": "raid_bdev1",
00:23:23.471    "uuid": "73785e7f-cb04-4f82-b777-2ccfdff57c7a",
00:23:23.471    "strip_size_kb": 64,
00:23:23.471    "state": "online",
00:23:23.471    "raid_level": "raid5f",
00:23:23.471    "superblock": true,
00:23:23.471    "num_base_bdevs": 3,
00:23:23.471    "num_base_bdevs_discovered": 3,
00:23:23.471    "num_base_bdevs_operational": 3,
00:23:23.471    "base_bdevs_list": [
00:23:23.471      {
00:23:23.471        "name": "spare",
00:23:23.471        "uuid": "3b6c239c-bd3f-5168-8688-52a291ea349c",
00:23:23.471        "is_configured": true,
00:23:23.471        "data_offset": 2048,
00:23:23.471        "data_size": 63488
00:23:23.471      },
00:23:23.471      {
00:23:23.471        "name": "BaseBdev2",
00:23:23.471        "uuid": "1096f5c3-cdfe-5767-adf9-abcd65a32a59",
00:23:23.471        "is_configured": true,
00:23:23.471        "data_offset": 2048,
00:23:23.471        "data_size": 63488
00:23:23.471      },
00:23:23.471      {
00:23:23.471        "name": "BaseBdev3",
00:23:23.471        "uuid": "76f2e77f-86d9-576a-af97-82a7b20a78ff",
00:23:23.471        "is_configured": true,
00:23:23.471        "data_offset": 2048,
00:23:23.471        "data_size": 63488
00:23:23.471      }
00:23:23.471    ]
00:23:23.471  }'
00:23:23.471   23:56:54	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:23.471   23:56:54	-- common/autotest_common.sh@10 -- # set +x
00:23:24.052   23:56:54	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:23:24.052  [2024-12-13 23:56:54.772225] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:24.052  [2024-12-13 23:56:54.772251] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:24.052  [2024-12-13 23:56:54.772315] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:24.052  [2024-12-13 23:56:54.772391] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:24.052  [2024-12-13 23:56:54.772402] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline
00:23:24.310    23:56:54	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:24.310    23:56:54	-- bdev/bdev_raid.sh@671 -- # jq length
00:23:24.310   23:56:55	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
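With the array deleted, bdev_raid_get_bdevs returns an empty list, so the @671 assertion is just a length check. Equivalent standalone form (error message illustrative):

    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs all | jq length)
    [[ $count -eq 0 ]] || echo "stale raid bdevs remain" >&2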
00:23:24.310   23:56:55	-- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:23:24.310   23:56:55	-- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:23:24.310   23:56:55	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:23:24.310   23:56:55	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:23:24.310   23:56:55	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:23:24.310   23:56:55	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:23:24.310   23:56:55	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:23:24.310   23:56:55	-- bdev/nbd_common.sh@12 -- # local i
00:23:24.310   23:56:55	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:23:24.310   23:56:55	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:23:24.311   23:56:55	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:23:24.570  /dev/nbd0
00:23:24.570    23:56:55	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:23:24.570   23:56:55	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:23:24.570   23:56:55	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:23:24.570   23:56:55	-- common/autotest_common.sh@867 -- # local i
00:23:24.570   23:56:55	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:23:24.570   23:56:55	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:23:24.570   23:56:55	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:23:24.570   23:56:55	-- common/autotest_common.sh@871 -- # break
00:23:24.570   23:56:55	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:23:24.570   23:56:55	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:23:24.570   23:56:55	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:23:24.570  1+0 records in
00:23:24.570  1+0 records out
00:23:24.570  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296439 s, 13.8 MB/s
00:23:24.570    23:56:55	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:23:24.570   23:56:55	-- common/autotest_common.sh@884 -- # size=4096
00:23:24.570   23:56:55	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:23:24.570   23:56:55	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:23:24.570   23:56:55	-- common/autotest_common.sh@887 -- # return 0
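waitfornbd (common/autotest_common.sh) is the readiness gate run for each exported device: it polls /proc/partitions until the nbd name appears, then proves the data path with one 4 KiB O_DIRECT read. A simplified reconstruction under stated assumptions; the real helper, as traced above, copies into a scratch file and checks its size, whereas this sketch reads to /dev/null:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # one direct read confirms the kernel<->SPDK nbd path is live
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }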
00:23:24.570   23:56:55	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:23:24.570   23:56:55	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:23:24.570   23:56:55	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:23:24.828  /dev/nbd1
00:23:24.828    23:56:55	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:23:24.828   23:56:55	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:23:24.828   23:56:55	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:23:24.828   23:56:55	-- common/autotest_common.sh@867 -- # local i
00:23:24.828   23:56:55	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:23:24.828   23:56:55	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:23:24.828   23:56:55	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:23:24.828   23:56:55	-- common/autotest_common.sh@871 -- # break
00:23:24.828   23:56:55	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:23:24.828   23:56:55	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:23:24.828   23:56:55	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:23:24.828  1+0 records in
00:23:24.828  1+0 records out
00:23:24.828  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250773 s, 16.3 MB/s
00:23:24.828    23:56:55	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:23:24.828   23:56:55	-- common/autotest_common.sh@884 -- # size=4096
00:23:24.828   23:56:55	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:23:24.828   23:56:55	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:23:24.828   23:56:55	-- common/autotest_common.sh@887 -- # return 0
00:23:24.828   23:56:55	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:23:24.828   23:56:55	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:23:24.828   23:56:55	-- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
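cmp -i 1048576 skips the first 1 MiB of both devices before comparing, which matches the data_offset of 2048 blocks x 512 bytes reported for every base bdev above: the comparison covers only the user-data region while each device's superblock area is ignored.

    # /dev/nbd0 exports BaseBdev1 (the original member); /dev/nbd1 exports
    # spare, whose contents were reconstructed from parity. Identical
    # payloads past the metadata offset prove the raid5f rebuild was correct.
    cmp -i 1048576 /dev/nbd0 /dev/nbd1 && echo "payloads identical"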
00:23:25.086   23:56:55	-- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:23:25.086   23:56:55	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:23:25.086   23:56:55	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:23:25.086   23:56:55	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:23:25.086   23:56:55	-- bdev/nbd_common.sh@51 -- # local i
00:23:25.086   23:56:55	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:23:25.086   23:56:55	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:23:25.345    23:56:55	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:23:25.345   23:56:55	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:23:25.345   23:56:55	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:23:25.345   23:56:55	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:23:25.345   23:56:55	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:23:25.345   23:56:55	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:23:25.345   23:56:55	-- bdev/nbd_common.sh@41 -- # break
00:23:25.345   23:56:55	-- bdev/nbd_common.sh@45 -- # return 0
00:23:25.345   23:56:55	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:23:25.345   23:56:55	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:23:25.603    23:56:56	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:23:25.603   23:56:56	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:23:25.603   23:56:56	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:23:25.603   23:56:56	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:23:25.603   23:56:56	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:23:25.603   23:56:56	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:23:25.603   23:56:56	-- bdev/nbd_common.sh@41 -- # break
00:23:25.603   23:56:56	-- bdev/nbd_common.sh@45 -- # return 0
00:23:25.603   23:56:56	-- bdev/bdev_raid.sh@692 -- # '[' true = true ']'
00:23:25.603   23:56:56	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:23:25.603   23:56:56	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']'
00:23:25.603   23:56:56	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:23:25.861   23:56:56	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:23:26.120  [2024-12-13 23:56:56.650636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:23:26.120  [2024-12-13 23:56:56.650726] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:26.120  [2024-12-13 23:56:56.650762] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:23:26.120  [2024-12-13 23:56:56.650790] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:26.120  [2024-12-13 23:56:56.652984] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:26.120  [2024-12-13 23:56:56.653053] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:23:26.120  [2024-12-13 23:56:56.653149] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:23:26.120  [2024-12-13 23:56:56.653215] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:26.120  BaseBdev1
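Deleting and recreating the passthru bdev forces SPDK to re-run examine on BaseBdev1; because its on-disk raid superblock survived, the claim is re-established automatically, with no explicit add_base_bdev call. The two RPCs, as issued above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1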
00:23:26.120   23:56:56	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:23:26.120   23:56:56	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']'
00:23:26.120   23:56:56	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2
00:23:26.120   23:56:56	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:23:26.378  [2024-12-13 23:56:57.014672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:23:26.378  [2024-12-13 23:56:57.014725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:26.378  [2024-12-13 23:56:57.014759] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:23:26.378  [2024-12-13 23:56:57.014778] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:26.378  [2024-12-13 23:56:57.015135] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:26.378  [2024-12-13 23:56:57.015184] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:23:26.378  [2024-12-13 23:56:57.015263] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2
00:23:26.378  [2024-12-13 23:56:57.015277] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1)
00:23:26.378  [2024-12-13 23:56:57.015284] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:26.379  [2024-12-13 23:56:57.015311] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state configuring
00:23:26.379  [2024-12-13 23:56:57.015379] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:23:26.379  BaseBdev2
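BaseBdev1's stale superblock (seq_number 1) had already assembled a configuring raid bdev, but BaseBdev2 arrives carrying seq_number 3, so the examine path tears the stale array down and reassembles it around the newer metadata, exactly as the @3237/@2137 debug lines record. An illustrative outline only, not the real C logic in bdev_raid.c:raid_bdev_examine_sb:

    # pseudo-bash sketch of the decision; function names are hypothetical
    if (( sb_seq_on_new_bdev > seq_of_existing_raid )); then
        delete_raid_bdev raid_bdev1              # existing array was 'configuring'
        configure_raid_from_superblock "$new_bdev"   # newer superblock wins
    fi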
00:23:26.379   23:56:57	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:23:26.379   23:56:57	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']'
00:23:26.379   23:56:57	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3
00:23:26.637   23:56:57	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:23:26.895  [2024-12-13 23:56:57.374737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:23:26.895  [2024-12-13 23:56:57.374791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:26.895  [2024-12-13 23:56:57.374825] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780
00:23:26.895  [2024-12-13 23:56:57.374844] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:26.895  [2024-12-13 23:56:57.375201] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:26.895  [2024-12-13 23:56:57.375259] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:23:26.895  [2024-12-13 23:56:57.375338] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3
00:23:26.895  [2024-12-13 23:56:57.375358] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:23:26.895  BaseBdev3
00:23:26.895   23:56:57	-- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:23:26.895   23:56:57	-- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:23:27.154  [2024-12-13 23:56:57.750818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:23:27.154  [2024-12-13 23:56:57.750877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:27.154  [2024-12-13 23:56:57.750908] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:23:27.154  [2024-12-13 23:56:57.750934] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:27.154  [2024-12-13 23:56:57.751317] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:27.154  [2024-12-13 23:56:57.751388] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:23:27.154  [2024-12-13 23:56:57.751469] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare
00:23:27.154  [2024-12-13 23:56:57.751497] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:23:27.154  spare
00:23:27.154   23:56:57	-- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:23:27.154   23:56:57	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:27.154   23:56:57	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:27.154   23:56:57	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:27.154   23:56:57	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:27.154   23:56:57	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:27.154   23:56:57	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:27.154   23:56:57	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:27.154   23:56:57	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:27.154   23:56:57	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:27.154    23:56:57	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:27.154    23:56:57	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:27.154  [2024-12-13 23:56:57.851599] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b480
00:23:27.154  [2024-12-13 23:56:57.851624] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:23:27.154  [2024-12-13 23:56:57.851723] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0
00:23:27.154  [2024-12-13 23:56:57.855725] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b480
00:23:27.154  [2024-12-13 23:56:57.855749] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b480
00:23:27.154  [2024-12-13 23:56:57.855899] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:27.412   23:56:57	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:27.412    "name": "raid_bdev1",
00:23:27.412    "uuid": "73785e7f-cb04-4f82-b777-2ccfdff57c7a",
00:23:27.412    "strip_size_kb": 64,
00:23:27.412    "state": "online",
00:23:27.412    "raid_level": "raid5f",
00:23:27.412    "superblock": true,
00:23:27.412    "num_base_bdevs": 3,
00:23:27.412    "num_base_bdevs_discovered": 3,
00:23:27.412    "num_base_bdevs_operational": 3,
00:23:27.412    "base_bdevs_list": [
00:23:27.412      {
00:23:27.412        "name": "spare",
00:23:27.412        "uuid": "3b6c239c-bd3f-5168-8688-52a291ea349c",
00:23:27.412        "is_configured": true,
00:23:27.412        "data_offset": 2048,
00:23:27.412        "data_size": 63488
00:23:27.412      },
00:23:27.412      {
00:23:27.412        "name": "BaseBdev2",
00:23:27.412        "uuid": "1096f5c3-cdfe-5767-adf9-abcd65a32a59",
00:23:27.412        "is_configured": true,
00:23:27.412        "data_offset": 2048,
00:23:27.412        "data_size": 63488
00:23:27.412      },
00:23:27.412      {
00:23:27.412        "name": "BaseBdev3",
00:23:27.412        "uuid": "76f2e77f-86d9-576a-af97-82a7b20a78ff",
00:23:27.412        "is_configured": true,
00:23:27.412        "data_offset": 2048,
00:23:27.412        "data_size": 63488
00:23:27.412      }
00:23:27.412    ]
00:23:27.412  }'
00:23:27.412   23:56:57	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:27.412   23:56:57	-- common/autotest_common.sh@10 -- # set +x
00:23:27.979   23:56:58	-- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none
00:23:27.979   23:56:58	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:23:27.979   23:56:58	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:23:27.979   23:56:58	-- bdev/bdev_raid.sh@185 -- # local target=none
00:23:27.979   23:56:58	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:23:27.979    23:56:58	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:27.979    23:56:58	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:28.237   23:56:58	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:23:28.237    "name": "raid_bdev1",
00:23:28.237    "uuid": "73785e7f-cb04-4f82-b777-2ccfdff57c7a",
00:23:28.237    "strip_size_kb": 64,
00:23:28.237    "state": "online",
00:23:28.237    "raid_level": "raid5f",
00:23:28.237    "superblock": true,
00:23:28.237    "num_base_bdevs": 3,
00:23:28.237    "num_base_bdevs_discovered": 3,
00:23:28.237    "num_base_bdevs_operational": 3,
00:23:28.237    "base_bdevs_list": [
00:23:28.237      {
00:23:28.237        "name": "spare",
00:23:28.237        "uuid": "3b6c239c-bd3f-5168-8688-52a291ea349c",
00:23:28.237        "is_configured": true,
00:23:28.237        "data_offset": 2048,
00:23:28.237        "data_size": 63488
00:23:28.237      },
00:23:28.237      {
00:23:28.237        "name": "BaseBdev2",
00:23:28.237        "uuid": "1096f5c3-cdfe-5767-adf9-abcd65a32a59",
00:23:28.237        "is_configured": true,
00:23:28.237        "data_offset": 2048,
00:23:28.237        "data_size": 63488
00:23:28.237      },
00:23:28.237      {
00:23:28.237        "name": "BaseBdev3",
00:23:28.237        "uuid": "76f2e77f-86d9-576a-af97-82a7b20a78ff",
00:23:28.237        "is_configured": true,
00:23:28.237        "data_offset": 2048,
00:23:28.237        "data_size": 63488
00:23:28.237      }
00:23:28.237    ]
00:23:28.237  }'
00:23:28.237    23:56:58	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:23:28.237   23:56:58	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:23:28.237    23:56:58	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:23:28.237   23:56:58	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:23:28.237    23:56:58	-- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:28.237    23:56:58	-- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name'
00:23:28.495   23:56:59	-- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]]
00:23:28.495   23:56:59	-- bdev/bdev_raid.sh@709 -- # killprocess 128686
00:23:28.495   23:56:59	-- common/autotest_common.sh@936 -- # '[' -z 128686 ']'
00:23:28.495   23:56:59	-- common/autotest_common.sh@940 -- # kill -0 128686
00:23:28.495    23:56:59	-- common/autotest_common.sh@941 -- # uname
00:23:28.495   23:56:59	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:28.495    23:56:59	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128686
00:23:28.754  killing process with pid 128686
00:23:28.754  Received shutdown signal, test time was about 60.000000 seconds
00:23:28.754  
00:23:28.754                                                                                                  Latency(us)
00:23:28.754  
[2024-12-13T23:56:59.486Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:28.754  
[2024-12-13T23:56:59.486Z]  ===================================================================================================================
00:23:28.754  
[2024-12-13T23:56:59.486Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
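The min column printing 18446744073709551616.00 is 2^64, i.e. one past UINT64_MAX as a float: consistent with no I/O completing during the shutdown window, the minimum-latency accumulator keeps its initial sentinel and the otherwise all-zero row is expected for a clean teardown rather than a failure.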
00:23:28.754   23:56:59	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:28.754   23:56:59	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:28.754   23:56:59	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 128686'
00:23:28.754   23:56:59	-- common/autotest_common.sh@955 -- # kill 128686
00:23:28.754  [2024-12-13 23:56:59.234114] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:23:28.754  [2024-12-13 23:56:59.234168] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:28.754   23:56:59	-- common/autotest_common.sh@960 -- # wait 128686
00:23:28.754  [2024-12-13 23:56:59.234231] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:28.754  [2024-12-13 23:56:59.234243] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b480 name raid_bdev1, state offline
00:23:29.013  [2024-12-13 23:56:59.498084] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:23:29.948  ************************************
00:23:29.948  END TEST raid5f_rebuild_test_sb
00:23:29.948  ************************************
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@711 -- # return 0
00:23:29.948  
00:23:29.948  real	0m23.742s
00:23:29.948  user	0m37.129s
00:23:29.948  sys	0m2.608s
00:23:29.948   23:57:00	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:23:29.948   23:57:00	-- common/autotest_common.sh@10 -- # set +x
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@743 -- # for n in {3..4}
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false
00:23:29.948   23:57:00	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:23:29.948   23:57:00	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:23:29.948   23:57:00	-- common/autotest_common.sh@10 -- # set +x
00:23:29.948  ************************************
00:23:29.948  START TEST raid5f_state_function_test
00:23:29.948  ************************************
00:23:29.948   23:57:00	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 false
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:23:29.948    23:57:00	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:23:29.948    23:57:00	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:29.948    23:57:00	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:23:29.948    23:57:00	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:23:29.948    23:57:00	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:29.948    23:57:00	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:23:29.948    23:57:00	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:23:29.948    23:57:00	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:29.948    23:57:00	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:23:29.948    23:57:00	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:23:29.948    23:57:00	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:29.948    23:57:00	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:23:29.948    23:57:00	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:23:29.948    23:57:00	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']'
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@226 -- # raid_pid=129312
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129312'
00:23:29.948  Process raid pid: 129312
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@228 -- # waitforlisten 129312 /var/tmp/spdk-raid.sock
00:23:29.948   23:57:00	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:23:29.948   23:57:00	-- common/autotest_common.sh@829 -- # '[' -z 129312 ']'
00:23:29.948   23:57:00	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:23:29.948   23:57:00	-- common/autotest_common.sh@834 -- # local max_retries=100
00:23:29.948   23:57:00	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:23:29.948  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:23:29.948   23:57:00	-- common/autotest_common.sh@838 -- # xtrace_disable
00:23:29.948   23:57:00	-- common/autotest_common.sh@10 -- # set +x
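waitforlisten blocks until the freshly spawned bdev_svc process opens its UNIX-domain RPC socket; the target's own startup lines appear below while it polls. A hedged approximation of the helper, with retry count from the trace and rpc_get_methods used as a liveness probe (an assumption; the real helper's probe is not visible in this log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk-raid.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            [[ -S $sock ]] &&
                "$rpc" -s "$sock" rpc_get_methods &>/dev/null && return 0
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            sleep 0.1
        done
        return 1
    }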
00:23:29.948  [2024-12-13 23:57:00.649113] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:29.948  [2024-12-13 23:57:00.649321] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:30.206  [2024-12-13 23:57:00.816766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:30.465  [2024-12-13 23:57:00.997757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:30.465  [2024-12-13 23:57:01.185687] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:23:31.032   23:57:01	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:31.032   23:57:01	-- common/autotest_common.sh@862 -- # return 0
00:23:31.032   23:57:01	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:23:31.032  [2024-12-13 23:57:01.753184] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:23:31.032  [2024-12-13 23:57:01.753276] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:23:31.032  [2024-12-13 23:57:01.753289] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:23:31.032  [2024-12-13 23:57:01.753312] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:23:31.032  [2024-12-13 23:57:01.753319] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:23:31.032  [2024-12-13 23:57:01.753355] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:23:31.032  [2024-12-13 23:57:01.753364] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:23:31.032  [2024-12-13 23:57:01.753385] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
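Creating a raid5f array over base bdevs that do not exist yet is deliberate: bdev_raid_create records the four names, leaves the array in 'configuring' state, and claims each base bdev as it is registered later. The verify that follows therefore expects configuring with zero discovered members, which a one-liner against the captured JSON would confirm:

    jq -r '.state + " " + (.num_base_bdevs_discovered|tostring)' <<<"$raid_bdev_info"
    # -> "configuring 0"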
00:23:31.290   23:57:01	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:31.290   23:57:01	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:31.290   23:57:01	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:31.290   23:57:01	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:31.290   23:57:01	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:31.290   23:57:01	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:31.290   23:57:01	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:31.290   23:57:01	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:31.290   23:57:01	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:31.290   23:57:01	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:31.290    23:57:01	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:31.290    23:57:01	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:31.290   23:57:01	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:31.290    "name": "Existed_Raid",
00:23:31.290    "uuid": "00000000-0000-0000-0000-000000000000",
00:23:31.290    "strip_size_kb": 64,
00:23:31.290    "state": "configuring",
00:23:31.290    "raid_level": "raid5f",
00:23:31.290    "superblock": false,
00:23:31.290    "num_base_bdevs": 4,
00:23:31.290    "num_base_bdevs_discovered": 0,
00:23:31.290    "num_base_bdevs_operational": 4,
00:23:31.290    "base_bdevs_list": [
00:23:31.290      {
00:23:31.290        "name": "BaseBdev1",
00:23:31.290        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:31.290        "is_configured": false,
00:23:31.290        "data_offset": 0,
00:23:31.290        "data_size": 0
00:23:31.290      },
00:23:31.290      {
00:23:31.290        "name": "BaseBdev2",
00:23:31.290        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:31.290        "is_configured": false,
00:23:31.290        "data_offset": 0,
00:23:31.290        "data_size": 0
00:23:31.290      },
00:23:31.290      {
00:23:31.290        "name": "BaseBdev3",
00:23:31.290        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:31.290        "is_configured": false,
00:23:31.290        "data_offset": 0,
00:23:31.290        "data_size": 0
00:23:31.290      },
00:23:31.290      {
00:23:31.290        "name": "BaseBdev4",
00:23:31.290        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:31.290        "is_configured": false,
00:23:31.290        "data_offset": 0,
00:23:31.290        "data_size": 0
00:23:31.290      }
00:23:31.290    ]
00:23:31.290  }'
00:23:31.290   23:57:01	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:31.290   23:57:01	-- common/autotest_common.sh@10 -- # set +x
00:23:31.857   23:57:02	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:23:32.115  [2024-12-13 23:57:02.729219] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:23:32.115  [2024-12-13 23:57:02.729255] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:23:32.115   23:57:02	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:23:32.374  [2024-12-13 23:57:02.989294] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:23:32.374  [2024-12-13 23:57:02.989351] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:23:32.374  [2024-12-13 23:57:02.989362] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:23:32.374  [2024-12-13 23:57:02.989387] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:23:32.374  [2024-12-13 23:57:02.989394] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:23:32.374  [2024-12-13 23:57:02.989428] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:23:32.374  [2024-12-13 23:57:02.989435] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:23:32.374  [2024-12-13 23:57:02.989457] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
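[annotation] The NOTICE/DEBUG pairs above show that bdev_raid_create accepts base bdev names that do not exist yet: each lookup fails, but the RPC still succeeds and the array is registered in "configuring" state until every member appears. A minimal way to reproduce that outside the harness, assuming an SPDK target is already serving RPCs on the same socket (all paths and sizes copied from the trace):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Register the array first; all four members are still missing, so the
    # raid bdev sits in "configuring" state rather than failing the RPC.
    $rpc bdev_raid_create -z 64 -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    $rpc bdev_raid_get_bdevs all | jq -r '.[0].state'   # -> "configuring"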
00:23:32.374   23:57:02	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:23:32.632  [2024-12-13 23:57:03.262889] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:32.632  BaseBdev1
00:23:32.632   23:57:03	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:23:32.632   23:57:03	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:23:32.632   23:57:03	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:32.632   23:57:03	-- common/autotest_common.sh@899 -- # local i
00:23:32.632   23:57:03	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:32.632   23:57:03	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:32.632   23:57:03	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:32.891   23:57:03	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:23:33.148  [
00:23:33.148    {
00:23:33.148      "name": "BaseBdev1",
00:23:33.149      "aliases": [
00:23:33.149        "fefefa51-ea95-4c41-9301-945d12cb5bc6"
00:23:33.149      ],
00:23:33.149      "product_name": "Malloc disk",
00:23:33.149      "block_size": 512,
00:23:33.149      "num_blocks": 65536,
00:23:33.149      "uuid": "fefefa51-ea95-4c41-9301-945d12cb5bc6",
00:23:33.149      "assigned_rate_limits": {
00:23:33.149        "rw_ios_per_sec": 0,
00:23:33.149        "rw_mbytes_per_sec": 0,
00:23:33.149        "r_mbytes_per_sec": 0,
00:23:33.149        "w_mbytes_per_sec": 0
00:23:33.149      },
00:23:33.149      "claimed": true,
00:23:33.149      "claim_type": "exclusive_write",
00:23:33.149      "zoned": false,
00:23:33.149      "supported_io_types": {
00:23:33.149        "read": true,
00:23:33.149        "write": true,
00:23:33.149        "unmap": true,
00:23:33.149        "write_zeroes": true,
00:23:33.149        "flush": true,
00:23:33.149        "reset": true,
00:23:33.149        "compare": false,
00:23:33.149        "compare_and_write": false,
00:23:33.149        "abort": true,
00:23:33.149        "nvme_admin": false,
00:23:33.149        "nvme_io": false
00:23:33.149      },
00:23:33.149      "memory_domains": [
00:23:33.149        {
00:23:33.149          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:33.149          "dma_device_type": 2
00:23:33.149        }
00:23:33.149      ],
00:23:33.149      "driver_specific": {}
00:23:33.149    }
00:23:33.149  ]
00:23:33.149   23:57:03	-- common/autotest_common.sh@905 -- # return 0
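[annotation] The common/autotest_common.sh@897-905 lines above trace waitforbdev: it defaults the timeout to 2000 ms, flushes examine callbacks with bdev_wait_for_examine, then relies on bdev_get_bdevs -t to block until the bdev shows up. The sketch below reconstructs that flow from the trace; the retry variable `i` declared at sh@899 suggests a retry loop the happy path never enters, which is omitted here as an assumption.

    # Reconstructed from the autotest_common.sh@897-905 trace above.
    waitforbdev() {
        local rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
        local bdev_name=$1
        local bdev_timeout=$2
        [[ -z $bdev_timeout ]] && bdev_timeout=2000   # ms; default seen at sh@900
        $rpc bdev_wait_for_examine                    # sh@902: let examine settle
        # sh@904: -t makes bdev_get_bdevs wait until the bdev appears or times out.
        $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null && return 0
        return 1
    }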
00:23:33.149   23:57:03	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:33.149   23:57:03	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:33.149   23:57:03	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:33.149   23:57:03	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:33.149   23:57:03	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:33.149   23:57:03	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:33.149   23:57:03	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:33.149   23:57:03	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:33.149   23:57:03	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:33.149   23:57:03	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:33.149    23:57:03	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:33.149    23:57:03	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:33.149   23:57:03	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:33.149    "name": "Existed_Raid",
00:23:33.149    "uuid": "00000000-0000-0000-0000-000000000000",
00:23:33.149    "strip_size_kb": 64,
00:23:33.149    "state": "configuring",
00:23:33.149    "raid_level": "raid5f",
00:23:33.149    "superblock": false,
00:23:33.149    "num_base_bdevs": 4,
00:23:33.149    "num_base_bdevs_discovered": 1,
00:23:33.149    "num_base_bdevs_operational": 4,
00:23:33.149    "base_bdevs_list": [
00:23:33.149      {
00:23:33.149        "name": "BaseBdev1",
00:23:33.149        "uuid": "fefefa51-ea95-4c41-9301-945d12cb5bc6",
00:23:33.149        "is_configured": true,
00:23:33.149        "data_offset": 0,
00:23:33.149        "data_size": 65536
00:23:33.149      },
00:23:33.149      {
00:23:33.149        "name": "BaseBdev2",
00:23:33.149        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:33.149        "is_configured": false,
00:23:33.149        "data_offset": 0,
00:23:33.149        "data_size": 0
00:23:33.149      },
00:23:33.149      {
00:23:33.149        "name": "BaseBdev3",
00:23:33.149        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:33.149        "is_configured": false,
00:23:33.149        "data_offset": 0,
00:23:33.149        "data_size": 0
00:23:33.149      },
00:23:33.149      {
00:23:33.149        "name": "BaseBdev4",
00:23:33.149        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:33.149        "is_configured": false,
00:23:33.149        "data_offset": 0,
00:23:33.149        "data_size": 0
00:23:33.149      }
00:23:33.149    ]
00:23:33.149  }'
00:23:33.149   23:57:03	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:33.149   23:57:03	-- common/autotest_common.sh@10 -- # set +x
00:23:34.084   23:57:04	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:23:34.084  [2024-12-13 23:57:04.643111] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:23:34.084  [2024-12-13 23:57:04.643361] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:23:34.084   23:57:04	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:23:34.084   23:57:04	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:23:34.343  [2024-12-13 23:57:04.887201] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:34.343  [2024-12-13 23:57:04.889219] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:23:34.343  [2024-12-13 23:57:04.889408] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:23:34.343  [2024-12-13 23:57:04.889542] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:23:34.343  [2024-12-13 23:57:04.889730] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:23:34.343  [2024-12-13 23:57:04.889825] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:23:34.343  [2024-12-13 23:57:04.889936] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:23:34.343   23:57:04	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:23:34.343   23:57:04	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:23:34.343   23:57:04	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:34.343   23:57:04	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:34.343   23:57:04	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:34.343   23:57:04	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:34.343   23:57:04	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:34.343   23:57:04	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:34.343   23:57:04	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:34.343   23:57:04	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:34.343   23:57:04	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:34.343   23:57:04	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:34.343    23:57:04	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:34.343    23:57:04	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:34.601   23:57:05	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:34.601    "name": "Existed_Raid",
00:23:34.601    "uuid": "00000000-0000-0000-0000-000000000000",
00:23:34.601    "strip_size_kb": 64,
00:23:34.601    "state": "configuring",
00:23:34.601    "raid_level": "raid5f",
00:23:34.601    "superblock": false,
00:23:34.601    "num_base_bdevs": 4,
00:23:34.601    "num_base_bdevs_discovered": 1,
00:23:34.601    "num_base_bdevs_operational": 4,
00:23:34.601    "base_bdevs_list": [
00:23:34.601      {
00:23:34.601        "name": "BaseBdev1",
00:23:34.601        "uuid": "fefefa51-ea95-4c41-9301-945d12cb5bc6",
00:23:34.601        "is_configured": true,
00:23:34.601        "data_offset": 0,
00:23:34.601        "data_size": 65536
00:23:34.601      },
00:23:34.601      {
00:23:34.601        "name": "BaseBdev2",
00:23:34.601        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:34.601        "is_configured": false,
00:23:34.601        "data_offset": 0,
00:23:34.601        "data_size": 0
00:23:34.601      },
00:23:34.601      {
00:23:34.601        "name": "BaseBdev3",
00:23:34.601        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:34.601        "is_configured": false,
00:23:34.601        "data_offset": 0,
00:23:34.601        "data_size": 0
00:23:34.601      },
00:23:34.601      {
00:23:34.601        "name": "BaseBdev4",
00:23:34.601        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:34.601        "is_configured": false,
00:23:34.601        "data_offset": 0,
00:23:34.601        "data_size": 0
00:23:34.601      }
00:23:34.601    ]
00:23:34.601  }'
00:23:34.601   23:57:05	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:34.601   23:57:05	-- common/autotest_common.sh@10 -- # set +x
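[annotation] The (( i = 1 )) / (( i < num_base_bdevs )) arithmetic pair at bdev_raid.sh@254 brackets a loop: verify the array is still "configuring", add the next malloc member, wait for it, repeat, and only after the last member expect "online". A hedged reconstruction, reusing the helpers sketched above (the 32 MiB / 512 B geometry comes from the bdev_malloc_create calls in the trace):

    # Hedged reconstruction of the bdev_raid.sh@254-259 loop seen in the trace.
    num_base_bdevs=4
    for ((i = 1; i < num_base_bdevs; i++)); do
        # Array must still be assembling while members are missing.
        verify_raid_bdev_state Existed_Raid configuring raid5f 64 $num_base_bdevs
        # 32 MiB at 512-byte blocks -> 65536 blocks per member.
        $rpc bdev_malloc_create 32 512 -b "BaseBdev$((i + 1))"
        waitforbdev "BaseBdev$((i + 1))"
    done
    # Adding the final member flips the array to "online" (bdev_raid.sh@259).
    verify_raid_bdev_state Existed_Raid online raid5f 64 $num_base_bdevs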
00:23:35.167   23:57:05	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:23:35.426  [2024-12-13 23:57:06.010616] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:23:35.426  BaseBdev2
00:23:35.426   23:57:06	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:23:35.426   23:57:06	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:23:35.426   23:57:06	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:35.426   23:57:06	-- common/autotest_common.sh@899 -- # local i
00:23:35.426   23:57:06	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:35.426   23:57:06	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:35.426   23:57:06	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:35.684   23:57:06	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:23:35.942  [
00:23:35.942    {
00:23:35.942      "name": "BaseBdev2",
00:23:35.942      "aliases": [
00:23:35.942        "1971b6ff-501e-4827-8462-6fa2a165ec84"
00:23:35.942      ],
00:23:35.942      "product_name": "Malloc disk",
00:23:35.942      "block_size": 512,
00:23:35.942      "num_blocks": 65536,
00:23:35.943      "uuid": "1971b6ff-501e-4827-8462-6fa2a165ec84",
00:23:35.943      "assigned_rate_limits": {
00:23:35.943        "rw_ios_per_sec": 0,
00:23:35.943        "rw_mbytes_per_sec": 0,
00:23:35.943        "r_mbytes_per_sec": 0,
00:23:35.943        "w_mbytes_per_sec": 0
00:23:35.943      },
00:23:35.943      "claimed": true,
00:23:35.943      "claim_type": "exclusive_write",
00:23:35.943      "zoned": false,
00:23:35.943      "supported_io_types": {
00:23:35.943        "read": true,
00:23:35.943        "write": true,
00:23:35.943        "unmap": true,
00:23:35.943        "write_zeroes": true,
00:23:35.943        "flush": true,
00:23:35.943        "reset": true,
00:23:35.943        "compare": false,
00:23:35.943        "compare_and_write": false,
00:23:35.943        "abort": true,
00:23:35.943        "nvme_admin": false,
00:23:35.943        "nvme_io": false
00:23:35.943      },
00:23:35.943      "memory_domains": [
00:23:35.943        {
00:23:35.943          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:35.943          "dma_device_type": 2
00:23:35.943        }
00:23:35.943      ],
00:23:35.943      "driver_specific": {}
00:23:35.943    }
00:23:35.943  ]
00:23:35.943   23:57:06	-- common/autotest_common.sh@905 -- # return 0
00:23:35.943   23:57:06	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:23:35.943   23:57:06	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:23:35.943   23:57:06	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:35.943   23:57:06	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:35.943   23:57:06	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:35.943   23:57:06	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:35.943   23:57:06	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:35.943   23:57:06	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:35.943   23:57:06	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:35.943   23:57:06	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:35.943   23:57:06	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:35.943   23:57:06	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:35.943    23:57:06	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:35.943    23:57:06	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:36.201   23:57:06	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:36.201    "name": "Existed_Raid",
00:23:36.201    "uuid": "00000000-0000-0000-0000-000000000000",
00:23:36.201    "strip_size_kb": 64,
00:23:36.201    "state": "configuring",
00:23:36.201    "raid_level": "raid5f",
00:23:36.201    "superblock": false,
00:23:36.201    "num_base_bdevs": 4,
00:23:36.201    "num_base_bdevs_discovered": 2,
00:23:36.201    "num_base_bdevs_operational": 4,
00:23:36.201    "base_bdevs_list": [
00:23:36.201      {
00:23:36.201        "name": "BaseBdev1",
00:23:36.201        "uuid": "fefefa51-ea95-4c41-9301-945d12cb5bc6",
00:23:36.201        "is_configured": true,
00:23:36.201        "data_offset": 0,
00:23:36.201        "data_size": 65536
00:23:36.201      },
00:23:36.201      {
00:23:36.201        "name": "BaseBdev2",
00:23:36.201        "uuid": "1971b6ff-501e-4827-8462-6fa2a165ec84",
00:23:36.201        "is_configured": true,
00:23:36.201        "data_offset": 0,
00:23:36.201        "data_size": 65536
00:23:36.201      },
00:23:36.201      {
00:23:36.201        "name": "BaseBdev3",
00:23:36.201        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:36.201        "is_configured": false,
00:23:36.201        "data_offset": 0,
00:23:36.201        "data_size": 0
00:23:36.201      },
00:23:36.201      {
00:23:36.201        "name": "BaseBdev4",
00:23:36.201        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:36.201        "is_configured": false,
00:23:36.201        "data_offset": 0,
00:23:36.201        "data_size": 0
00:23:36.201      }
00:23:36.201    ]
00:23:36.201  }'
00:23:36.201   23:57:06	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:36.201   23:57:06	-- common/autotest_common.sh@10 -- # set +x
00:23:36.768   23:57:07	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:23:37.026  [2024-12-13 23:57:07.582956] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:23:37.026  BaseBdev3
00:23:37.026   23:57:07	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:23:37.026   23:57:07	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:23:37.026   23:57:07	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:37.026   23:57:07	-- common/autotest_common.sh@899 -- # local i
00:23:37.026   23:57:07	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:37.026   23:57:07	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:37.026   23:57:07	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:37.284   23:57:07	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:23:37.284  [
00:23:37.284    {
00:23:37.284      "name": "BaseBdev3",
00:23:37.284      "aliases": [
00:23:37.284        "8bb030ba-950c-49b3-ad1b-993036f86afb"
00:23:37.284      ],
00:23:37.284      "product_name": "Malloc disk",
00:23:37.284      "block_size": 512,
00:23:37.284      "num_blocks": 65536,
00:23:37.284      "uuid": "8bb030ba-950c-49b3-ad1b-993036f86afb",
00:23:37.284      "assigned_rate_limits": {
00:23:37.284        "rw_ios_per_sec": 0,
00:23:37.284        "rw_mbytes_per_sec": 0,
00:23:37.284        "r_mbytes_per_sec": 0,
00:23:37.284        "w_mbytes_per_sec": 0
00:23:37.284      },
00:23:37.284      "claimed": true,
00:23:37.284      "claim_type": "exclusive_write",
00:23:37.284      "zoned": false,
00:23:37.284      "supported_io_types": {
00:23:37.284        "read": true,
00:23:37.284        "write": true,
00:23:37.284        "unmap": true,
00:23:37.284        "write_zeroes": true,
00:23:37.284        "flush": true,
00:23:37.284        "reset": true,
00:23:37.284        "compare": false,
00:23:37.284        "compare_and_write": false,
00:23:37.284        "abort": true,
00:23:37.284        "nvme_admin": false,
00:23:37.284        "nvme_io": false
00:23:37.284      },
00:23:37.284      "memory_domains": [
00:23:37.284        {
00:23:37.284          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:37.284          "dma_device_type": 2
00:23:37.284        }
00:23:37.284      ],
00:23:37.284      "driver_specific": {}
00:23:37.284    }
00:23:37.284  ]
00:23:37.284   23:57:07	-- common/autotest_common.sh@905 -- # return 0
00:23:37.284   23:57:07	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:23:37.284   23:57:07	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:23:37.285   23:57:07	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:37.285   23:57:07	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:37.285   23:57:07	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:37.285   23:57:07	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:37.285   23:57:07	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:37.285   23:57:07	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:37.285   23:57:07	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:37.285   23:57:07	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:37.285   23:57:07	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:37.285   23:57:07	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:37.285    23:57:07	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:37.285    23:57:07	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:37.543   23:57:08	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:37.543    "name": "Existed_Raid",
00:23:37.543    "uuid": "00000000-0000-0000-0000-000000000000",
00:23:37.543    "strip_size_kb": 64,
00:23:37.543    "state": "configuring",
00:23:37.543    "raid_level": "raid5f",
00:23:37.543    "superblock": false,
00:23:37.543    "num_base_bdevs": 4,
00:23:37.543    "num_base_bdevs_discovered": 3,
00:23:37.543    "num_base_bdevs_operational": 4,
00:23:37.543    "base_bdevs_list": [
00:23:37.543      {
00:23:37.543        "name": "BaseBdev1",
00:23:37.543        "uuid": "fefefa51-ea95-4c41-9301-945d12cb5bc6",
00:23:37.543        "is_configured": true,
00:23:37.543        "data_offset": 0,
00:23:37.543        "data_size": 65536
00:23:37.543      },
00:23:37.543      {
00:23:37.543        "name": "BaseBdev2",
00:23:37.543        "uuid": "1971b6ff-501e-4827-8462-6fa2a165ec84",
00:23:37.543        "is_configured": true,
00:23:37.543        "data_offset": 0,
00:23:37.543        "data_size": 65536
00:23:37.543      },
00:23:37.543      {
00:23:37.543        "name": "BaseBdev3",
00:23:37.543        "uuid": "8bb030ba-950c-49b3-ad1b-993036f86afb",
00:23:37.543        "is_configured": true,
00:23:37.543        "data_offset": 0,
00:23:37.543        "data_size": 65536
00:23:37.543      },
00:23:37.543      {
00:23:37.543        "name": "BaseBdev4",
00:23:37.543        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:37.543        "is_configured": false,
00:23:37.543        "data_offset": 0,
00:23:37.543        "data_size": 0
00:23:37.543      }
00:23:37.543    ]
00:23:37.543  }'
00:23:37.543   23:57:08	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:37.543   23:57:08	-- common/autotest_common.sh@10 -- # set +x
00:23:38.111   23:57:08	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:23:38.369  [2024-12-13 23:57:09.074583] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:23:38.369  [2024-12-13 23:57:09.074802] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:23:38.369  [2024-12-13 23:57:09.074847] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:23:38.369  [2024-12-13 23:57:09.075088] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:23:38.369  [2024-12-13 23:57:09.080809] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:23:38.369  [2024-12-13 23:57:09.080947] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80
00:23:38.369  [2024-12-13 23:57:09.081276] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:38.369  BaseBdev4
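[annotation] Once BaseBdev4 is claimed, raid_bdev_configure_cont registers the assembled array with "blockcnt 196608, blocklen 512" (a few lines above). That is consistent with raid5f dedicating one member's worth of space to parity: usable_blocks = (num_base_bdevs - 1) * per_member_blocks. A quick shell check of the arithmetic:

    echo $(( (4 - 1) * 65536 ))              # -> 196608, matching blockcnt above
    echo $(( 196608 * 512 / 1024 / 1024 ))   # -> 96 MiB usable from four 32 MiB members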
00:23:38.369   23:57:09	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:23:38.369   23:57:09	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:23:38.369   23:57:09	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:38.369   23:57:09	-- common/autotest_common.sh@899 -- # local i
00:23:38.369   23:57:09	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:38.369   23:57:09	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:38.369   23:57:09	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:38.628   23:57:09	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:23:38.887  [
00:23:38.887    {
00:23:38.887      "name": "BaseBdev4",
00:23:38.887      "aliases": [
00:23:38.887        "7c0806bd-c795-4ccf-80a2-6478f963e87b"
00:23:38.887      ],
00:23:38.887      "product_name": "Malloc disk",
00:23:38.887      "block_size": 512,
00:23:38.887      "num_blocks": 65536,
00:23:38.887      "uuid": "7c0806bd-c795-4ccf-80a2-6478f963e87b",
00:23:38.887      "assigned_rate_limits": {
00:23:38.887        "rw_ios_per_sec": 0,
00:23:38.887        "rw_mbytes_per_sec": 0,
00:23:38.887        "r_mbytes_per_sec": 0,
00:23:38.887        "w_mbytes_per_sec": 0
00:23:38.887      },
00:23:38.887      "claimed": true,
00:23:38.887      "claim_type": "exclusive_write",
00:23:38.887      "zoned": false,
00:23:38.887      "supported_io_types": {
00:23:38.887        "read": true,
00:23:38.887        "write": true,
00:23:38.887        "unmap": true,
00:23:38.887        "write_zeroes": true,
00:23:38.887        "flush": true,
00:23:38.887        "reset": true,
00:23:38.887        "compare": false,
00:23:38.887        "compare_and_write": false,
00:23:38.887        "abort": true,
00:23:38.887        "nvme_admin": false,
00:23:38.887        "nvme_io": false
00:23:38.887      },
00:23:38.887      "memory_domains": [
00:23:38.887        {
00:23:38.887          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:38.887          "dma_device_type": 2
00:23:38.887        }
00:23:38.887      ],
00:23:38.887      "driver_specific": {}
00:23:38.887    }
00:23:38.887  ]
00:23:38.887   23:57:09	-- common/autotest_common.sh@905 -- # return 0
00:23:38.887   23:57:09	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:23:38.887   23:57:09	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:23:38.887   23:57:09	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:23:38.887   23:57:09	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:38.887   23:57:09	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:38.887   23:57:09	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:38.887   23:57:09	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:38.887   23:57:09	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:38.887   23:57:09	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:38.887   23:57:09	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:38.887   23:57:09	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:38.887   23:57:09	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:38.887    23:57:09	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:38.887    23:57:09	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:39.146   23:57:09	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:39.146    "name": "Existed_Raid",
00:23:39.146    "uuid": "cae582f7-b29a-43da-b527-a5b8809c3e43",
00:23:39.146    "strip_size_kb": 64,
00:23:39.146    "state": "online",
00:23:39.146    "raid_level": "raid5f",
00:23:39.146    "superblock": false,
00:23:39.146    "num_base_bdevs": 4,
00:23:39.146    "num_base_bdevs_discovered": 4,
00:23:39.146    "num_base_bdevs_operational": 4,
00:23:39.146    "base_bdevs_list": [
00:23:39.146      {
00:23:39.146        "name": "BaseBdev1",
00:23:39.146        "uuid": "fefefa51-ea95-4c41-9301-945d12cb5bc6",
00:23:39.146        "is_configured": true,
00:23:39.146        "data_offset": 0,
00:23:39.146        "data_size": 65536
00:23:39.146      },
00:23:39.146      {
00:23:39.146        "name": "BaseBdev2",
00:23:39.146        "uuid": "1971b6ff-501e-4827-8462-6fa2a165ec84",
00:23:39.146        "is_configured": true,
00:23:39.146        "data_offset": 0,
00:23:39.146        "data_size": 65536
00:23:39.146      },
00:23:39.146      {
00:23:39.146        "name": "BaseBdev3",
00:23:39.146        "uuid": "8bb030ba-950c-49b3-ad1b-993036f86afb",
00:23:39.146        "is_configured": true,
00:23:39.146        "data_offset": 0,
00:23:39.146        "data_size": 65536
00:23:39.146      },
00:23:39.146      {
00:23:39.146        "name": "BaseBdev4",
00:23:39.146        "uuid": "7c0806bd-c795-4ccf-80a2-6478f963e87b",
00:23:39.146        "is_configured": true,
00:23:39.146        "data_offset": 0,
00:23:39.146        "data_size": 65536
00:23:39.146      }
00:23:39.146    ]
00:23:39.146  }'
00:23:39.146   23:57:09	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:39.146   23:57:09	-- common/autotest_common.sh@10 -- # set +x
00:23:39.713   23:57:10	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:23:39.972  [2024-12-13 23:57:10.465443] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:23:39.972   23:57:10	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:23:39.972   23:57:10	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f
00:23:39.972   23:57:10	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:23:39.972   23:57:10	-- bdev/bdev_raid.sh@196 -- # return 0
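[annotation] The bdev_raid.sh@195-196 lines above are the has_redundancy helper: a case statement that returns 0 for raid5f, which is why the test sets expected_state=online after deleting BaseBdev1. Only the raid5f branch is exercised in this trace, so listing raid1 in the sketch below is an assumption:

    # Reconstruction of has_redundancy (bdev_raid.sh@195-196); raid1 membership
    # is assumed, only raid5f appears in this log.
    has_redundancy() {
        case $1 in
            raid5f | raid1) return 0 ;;   # can survive losing a member
            *) return 1 ;;                # e.g. raid0/concat: any loss is fatal
        esac
    }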
00:23:39.972   23:57:10	-- bdev/bdev_raid.sh@267 -- # expected_state=online
00:23:39.972   23:57:10	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:23:39.972   23:57:10	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:39.972   23:57:10	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:39.972   23:57:10	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:39.972   23:57:10	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:39.972   23:57:10	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:39.972   23:57:10	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:39.972   23:57:10	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:39.972   23:57:10	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:39.972   23:57:10	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:39.972    23:57:10	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:39.972    23:57:10	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:40.230   23:57:10	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:40.230    "name": "Existed_Raid",
00:23:40.230    "uuid": "cae582f7-b29a-43da-b527-a5b8809c3e43",
00:23:40.230    "strip_size_kb": 64,
00:23:40.230    "state": "online",
00:23:40.230    "raid_level": "raid5f",
00:23:40.230    "superblock": false,
00:23:40.230    "num_base_bdevs": 4,
00:23:40.230    "num_base_bdevs_discovered": 3,
00:23:40.230    "num_base_bdevs_operational": 3,
00:23:40.230    "base_bdevs_list": [
00:23:40.230      {
00:23:40.230        "name": null,
00:23:40.230        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:40.230        "is_configured": false,
00:23:40.230        "data_offset": 0,
00:23:40.230        "data_size": 65536
00:23:40.230      },
00:23:40.230      {
00:23:40.230        "name": "BaseBdev2",
00:23:40.230        "uuid": "1971b6ff-501e-4827-8462-6fa2a165ec84",
00:23:40.230        "is_configured": true,
00:23:40.230        "data_offset": 0,
00:23:40.230        "data_size": 65536
00:23:40.230      },
00:23:40.230      {
00:23:40.230        "name": "BaseBdev3",
00:23:40.230        "uuid": "8bb030ba-950c-49b3-ad1b-993036f86afb",
00:23:40.230        "is_configured": true,
00:23:40.230        "data_offset": 0,
00:23:40.230        "data_size": 65536
00:23:40.230      },
00:23:40.230      {
00:23:40.230        "name": "BaseBdev4",
00:23:40.230        "uuid": "7c0806bd-c795-4ccf-80a2-6478f963e87b",
00:23:40.230        "is_configured": true,
00:23:40.230        "data_offset": 0,
00:23:40.230        "data_size": 65536
00:23:40.230      }
00:23:40.230    ]
00:23:40.230  }'
00:23:40.230   23:57:10	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:40.230   23:57:10	-- common/autotest_common.sh@10 -- # set +x
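[annotation] The teardown mirrors assembly: the bdev_raid.sh@273 loop deletes one base bdev per iteration and confirms the raid bdev still exists by name ('.[0]["name"]' must equal Existed_Raid), while the debug lines show the array sliding from online to offline once a second member is lost. After the final deletion, the sh@281 query adds select(.) so that a missing array yields an empty string instead of the literal "null". A hedged reconstruction, reusing $rpc from the sketches above:

    # Hedged reconstruction of the teardown loop (bdev_raid.sh@273-281).
    # BaseBdev1 was already deleted before the loop, as in the trace.
    for ((i = 1; i < num_base_bdevs; i++)); do
        raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
        [[ $raid_bdev == "Existed_Raid" ]] || exit 1   # array must still exist
        $rpc bdev_malloc_delete "BaseBdev$((i + 1))"
    done
    # select(.) drops null: an empty result means the raid bdev is fully gone.
    raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
    [[ -z $raid_bdev ]] || exit 1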
00:23:40.796   23:57:11	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:23:40.796   23:57:11	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:40.796    23:57:11	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:40.796    23:57:11	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:23:41.055   23:57:11	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:23:41.055   23:57:11	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:23:41.055   23:57:11	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:23:41.324  [2024-12-13 23:57:11.920200] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:23:41.324  [2024-12-13 23:57:11.920560] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:41.324  [2024-12-13 23:57:11.920987] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:41.324   23:57:12	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:23:41.324   23:57:12	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:41.324    23:57:12	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:41.324    23:57:12	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:23:41.601   23:57:12	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:23:41.601   23:57:12	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:23:41.601   23:57:12	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:23:41.859  [2024-12-13 23:57:12.487002] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:23:41.859   23:57:12	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:23:41.859   23:57:12	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:41.859    23:57:12	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:41.859    23:57:12	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:23:42.118   23:57:12	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:23:42.118   23:57:12	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:23:42.118   23:57:12	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:23:42.377  [2024-12-13 23:57:13.041431] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:23:42.377  [2024-12-13 23:57:13.041644] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline
00:23:42.636   23:57:13	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:23:42.636   23:57:13	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:42.636    23:57:13	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:42.636    23:57:13	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:23:42.894   23:57:13	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:23:42.894   23:57:13	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:23:42.894   23:57:13	-- bdev/bdev_raid.sh@287 -- # killprocess 129312
00:23:42.894   23:57:13	-- common/autotest_common.sh@936 -- # '[' -z 129312 ']'
00:23:42.894   23:57:13	-- common/autotest_common.sh@940 -- # kill -0 129312
00:23:42.894    23:57:13	-- common/autotest_common.sh@941 -- # uname
00:23:42.895   23:57:13	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:42.895    23:57:13	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129312
00:23:42.895   23:57:13	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:42.895   23:57:13	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:42.895   23:57:13	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 129312'
00:23:42.895  killing process with pid 129312
00:23:42.895   23:57:13	-- common/autotest_common.sh@955 -- # kill 129312
00:23:42.895  [2024-12-13 23:57:13.393457] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:23:42.895   23:57:13	-- common/autotest_common.sh@960 -- # wait 129312
00:23:42.895  [2024-12-13 23:57:13.393762] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
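[annotation] The killprocess trace above (common/autotest_common.sh@936-960) validates the pid, probes liveness with kill -0, inspects the process name via ps (here reactor_0, the SPDK reactor thread) to make sure it is not about to signal sudo itself, then kills and waits so the target's shutdown logs land in the console. A reconstruction from those trace lines; how the real helper handles the sudo case is not visible here and is treated as an assumption:

    # Reconstruction of killprocess from the autotest_common.sh@936-960 trace.
    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1                          # sh@936
        kill -0 "$pid"                                     # sh@940: still alive?
        if [[ $(uname) == Linux ]]; then                   # sh@941
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")  # sh@942
            [[ $process_name == sudo ]] && return 1        # assumed guard (sh@946)
        fi
        echo "killing process with pid $pid"               # sh@954
        kill "$pid"                                        # sh@955
        wait "$pid"                                        # sh@960: reap, flush logs
    }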
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@289 -- # return 0
00:23:43.832  
00:23:43.832  real	0m13.752s
00:23:43.832  user	0m24.429s
00:23:43.832  sys	0m1.721s
00:23:43.832   23:57:14	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:23:43.832   23:57:14	-- common/autotest_common.sh@10 -- # set +x
00:23:43.832  ************************************
00:23:43.832  END TEST raid5f_state_function_test
00:23:43.832  ************************************
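[annotation] The banner blocks and the real/user/sys triple above come from run_test, the autotest wrapper that names, times, and brackets each test function; the @1087 check ('[' 5 -le 1 ']') is an argument-count guard. The banner text below matches the log verbatim, while the internals are inferred and should be read as a sketch:

    # Hedged reconstruction of run_test (common/autotest_common.sh).
    run_test() {
        local test_name=$1
        shift
        (($# >= 1)) || return 1    # sh@1087: needs a command after the name
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                  # emits the real/user/sys triple seen above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }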
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true
00:23:43.832   23:57:14	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:23:43.832   23:57:14	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:23:43.832   23:57:14	-- common/autotest_common.sh@10 -- # set +x
00:23:43.832  ************************************
00:23:43.832  START TEST raid5f_state_function_test_sb
00:23:43.832  ************************************
00:23:43.832   23:57:14	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 true
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:23:43.832    23:57:14	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:23:43.832    23:57:14	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:43.832    23:57:14	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:23:43.832    23:57:14	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:23:43.832    23:57:14	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:43.832    23:57:14	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:23:43.832    23:57:14	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:23:43.832    23:57:14	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:43.832    23:57:14	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:23:43.832    23:57:14	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:23:43.832    23:57:14	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:43.832    23:57:14	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:23:43.832    23:57:14	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:23:43.832    23:57:14	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']'
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@226 -- # raid_pid=129753
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129753'
00:23:43.832  Process raid pid: 129753
00:23:43.832   23:57:14	-- bdev/bdev_raid.sh@228 -- # waitforlisten 129753 /var/tmp/spdk-raid.sock
00:23:43.832   23:57:14	-- common/autotest_common.sh@829 -- # '[' -z 129753 ']'
00:23:43.832   23:57:14	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:23:43.832   23:57:14	-- common/autotest_common.sh@834 -- # local max_retries=100
00:23:43.832   23:57:14	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:23:43.832  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:23:43.832   23:57:14	-- common/autotest_common.sh@838 -- # xtrace_disable
00:23:43.832   23:57:14	-- common/autotest_common.sh@10 -- # set +x
00:23:43.832  [2024-12-13 23:57:14.466246] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:43.832  [2024-12-13 23:57:14.466655] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:44.091  [2024-12-13 23:57:14.642161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:44.351  [2024-12-13 23:57:14.857548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:44.351  [2024-12-13 23:57:15.025010] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:23:44.917   23:57:15	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:44.917   23:57:15	-- common/autotest_common.sh@862 -- # return 0
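[annotation] Between the bdev_svc launch and the first RPC, waitforlisten (common/autotest_common.sh@829-862) blocks until the freshly started target answers on /var/tmp/spdk-raid.sock; the trace only shows its locals, the wait banner, and the successful exit at sh@862. The polling body below is an assumption; rpc_get_methods is used as a cheap liveness probe, though the real helper may probe differently:

    # Hedged reconstruction of waitforlisten (autotest_common.sh@829-862).
    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk-raid.sock}   # sh@833
        local max_retries=100                          # sh@834
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        local i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1                 # target died during startup
            # Assumed probe: any trivial RPC succeeds once the socket is live.
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }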
00:23:44.917   23:57:15	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:23:44.917  [2024-12-13 23:57:15.621757] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:23:44.917  [2024-12-13 23:57:15.621986] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:23:44.917  [2024-12-13 23:57:15.622094] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:23:44.917  [2024-12-13 23:57:15.622227] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:23:44.917  [2024-12-13 23:57:15.622322] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:23:44.917  [2024-12-13 23:57:15.622401] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:23:44.917  [2024-12-13 23:57:15.622637] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:23:44.917  [2024-12-13 23:57:15.622699] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:23:44.917   23:57:15	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:44.917   23:57:15	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:44.917   23:57:15	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:44.917   23:57:15	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:44.917   23:57:15	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:44.917   23:57:15	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:44.917   23:57:15	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:44.917   23:57:15	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:44.917   23:57:15	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:44.917   23:57:15	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:44.917    23:57:15	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:44.917    23:57:15	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:45.175   23:57:15	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:45.175    "name": "Existed_Raid",
00:23:45.175    "uuid": "6352e2f7-2af5-48c8-922c-a28f51ae2801",
00:23:45.175    "strip_size_kb": 64,
00:23:45.175    "state": "configuring",
00:23:45.175    "raid_level": "raid5f",
00:23:45.175    "superblock": true,
00:23:45.175    "num_base_bdevs": 4,
00:23:45.175    "num_base_bdevs_discovered": 0,
00:23:45.175    "num_base_bdevs_operational": 4,
00:23:45.175    "base_bdevs_list": [
00:23:45.175      {
00:23:45.175        "name": "BaseBdev1",
00:23:45.175        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:45.175        "is_configured": false,
00:23:45.175        "data_offset": 0,
00:23:45.175        "data_size": 0
00:23:45.175      },
00:23:45.175      {
00:23:45.175        "name": "BaseBdev2",
00:23:45.175        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:45.175        "is_configured": false,
00:23:45.175        "data_offset": 0,
00:23:45.175        "data_size": 0
00:23:45.175      },
00:23:45.175      {
00:23:45.175        "name": "BaseBdev3",
00:23:45.175        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:45.175        "is_configured": false,
00:23:45.175        "data_offset": 0,
00:23:45.175        "data_size": 0
00:23:45.175      },
00:23:45.175      {
00:23:45.175        "name": "BaseBdev4",
00:23:45.175        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:45.176        "is_configured": false,
00:23:45.176        "data_offset": 0,
00:23:45.176        "data_size": 0
00:23:45.176      }
00:23:45.176    ]
00:23:45.176  }'
00:23:45.176   23:57:15	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:45.176   23:57:15	-- common/autotest_common.sh@10 -- # set +x
00:23:45.743   23:57:16	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:23:46.001  [2024-12-13 23:57:16.661841] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:23:46.001  [2024-12-13 23:57:16.662049] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring
00:23:46.001   23:57:16	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:23:46.260  [2024-12-13 23:57:16.845907] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:23:46.260  [2024-12-13 23:57:16.846089] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:23:46.260  [2024-12-13 23:57:16.846189] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:23:46.260  [2024-12-13 23:57:16.846256] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:23:46.260  [2024-12-13 23:57:16.846348] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:23:46.260  [2024-12-13 23:57:16.846423] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:23:46.260  [2024-12-13 23:57:16.846455] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:23:46.260  [2024-12-13 23:57:16.846562] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:23:46.260   23:57:16	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:23:46.518  [2024-12-13 23:57:17.115118] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:46.518  BaseBdev1
00:23:46.518   23:57:17	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:23:46.519   23:57:17	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:23:46.519   23:57:17	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:46.519   23:57:17	-- common/autotest_common.sh@899 -- # local i
00:23:46.519   23:57:17	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:46.519   23:57:17	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:46.519   23:57:17	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:46.777   23:57:17	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:23:47.036  [
00:23:47.036    {
00:23:47.036      "name": "BaseBdev1",
00:23:47.036      "aliases": [
00:23:47.036        "3211e493-9c3d-4f06-98b9-2bfb38be4dec"
00:23:47.036      ],
00:23:47.036      "product_name": "Malloc disk",
00:23:47.036      "block_size": 512,
00:23:47.036      "num_blocks": 65536,
00:23:47.036      "uuid": "3211e493-9c3d-4f06-98b9-2bfb38be4dec",
00:23:47.036      "assigned_rate_limits": {
00:23:47.036        "rw_ios_per_sec": 0,
00:23:47.036        "rw_mbytes_per_sec": 0,
00:23:47.036        "r_mbytes_per_sec": 0,
00:23:47.036        "w_mbytes_per_sec": 0
00:23:47.036      },
00:23:47.036      "claimed": true,
00:23:47.036      "claim_type": "exclusive_write",
00:23:47.036      "zoned": false,
00:23:47.036      "supported_io_types": {
00:23:47.036        "read": true,
00:23:47.036        "write": true,
00:23:47.036        "unmap": true,
00:23:47.036        "write_zeroes": true,
00:23:47.036        "flush": true,
00:23:47.036        "reset": true,
00:23:47.036        "compare": false,
00:23:47.036        "compare_and_write": false,
00:23:47.036        "abort": true,
00:23:47.036        "nvme_admin": false,
00:23:47.036        "nvme_io": false
00:23:47.036      },
00:23:47.036      "memory_domains": [
00:23:47.036        {
00:23:47.036          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:47.036          "dma_device_type": 2
00:23:47.036        }
00:23:47.036      ],
00:23:47.036      "driver_specific": {}
00:23:47.036    }
00:23:47.036  ]
00:23:47.036   23:57:17	-- common/autotest_common.sh@905 -- # return 0
00:23:47.036   23:57:17	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:47.036   23:57:17	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:47.036   23:57:17	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:47.036   23:57:17	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:47.036   23:57:17	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:47.036   23:57:17	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:47.036   23:57:17	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:47.036   23:57:17	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:47.036   23:57:17	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:47.036   23:57:17	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:47.036    23:57:17	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:47.036    23:57:17	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:47.036   23:57:17	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:47.036    "name": "Existed_Raid",
00:23:47.036    "uuid": "cd12e537-8480-4762-a3bc-a0d1ab316e69",
00:23:47.036    "strip_size_kb": 64,
00:23:47.036    "state": "configuring",
00:23:47.036    "raid_level": "raid5f",
00:23:47.036    "superblock": true,
00:23:47.036    "num_base_bdevs": 4,
00:23:47.037    "num_base_bdevs_discovered": 1,
00:23:47.037    "num_base_bdevs_operational": 4,
00:23:47.037    "base_bdevs_list": [
00:23:47.037      {
00:23:47.037        "name": "BaseBdev1",
00:23:47.037        "uuid": "3211e493-9c3d-4f06-98b9-2bfb38be4dec",
00:23:47.037        "is_configured": true,
00:23:47.037        "data_offset": 2048,
00:23:47.037        "data_size": 63488
00:23:47.037      },
00:23:47.037      {
00:23:47.037        "name": "BaseBdev2",
00:23:47.037        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:47.037        "is_configured": false,
00:23:47.037        "data_offset": 0,
00:23:47.037        "data_size": 0
00:23:47.037      },
00:23:47.037      {
00:23:47.037        "name": "BaseBdev3",
00:23:47.037        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:47.037        "is_configured": false,
00:23:47.037        "data_offset": 0,
00:23:47.037        "data_size": 0
00:23:47.037      },
00:23:47.037      {
00:23:47.037        "name": "BaseBdev4",
00:23:47.037        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:47.037        "is_configured": false,
00:23:47.037        "data_offset": 0,
00:23:47.037        "data_size": 0
00:23:47.037      }
00:23:47.037    ]
00:23:47.037  }'
00:23:47.037   23:57:17	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:47.037   23:57:17	-- common/autotest_common.sh@10 -- # set +x
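[annotation] This is the first verify in the superblock variant (-s passed to bdev_raid_create), and the JSON above shows the effect: BaseBdev1 now reports data_offset 2048 and data_size 63488 instead of 0 and 65536, because the head of each member is reserved for the on-disk superblock. The arithmetic checks out against the 65536-block, 512 B members created earlier:

    echo $(( 2048 * 512 ))     # -> 1048576 bytes: 1 MiB reserved per member
    echo $(( 65536 - 2048 ))   # -> 63488 data blocks, matching data_size above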
00:23:47.972   23:57:18	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:23:47.972  [2024-12-13 23:57:18.511379] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:23:47.973  [2024-12-13 23:57:18.511598] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:23:47.973   23:57:18	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:23:47.973   23:57:18	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:23:48.231   23:57:18	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:23:48.490  BaseBdev1
00:23:48.490   23:57:19	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:23:48.490   23:57:19	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:23:48.490   23:57:19	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:48.490   23:57:19	-- common/autotest_common.sh@899 -- # local i
00:23:48.490   23:57:19	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:48.490   23:57:19	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:48.490   23:57:19	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:48.749   23:57:19	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:23:48.749  [
00:23:48.749    {
00:23:48.749      "name": "BaseBdev1",
00:23:48.749      "aliases": [
00:23:48.749        "e21fc9e0-92d2-4216-8015-c4845d8d3535"
00:23:48.749      ],
00:23:48.749      "product_name": "Malloc disk",
00:23:48.749      "block_size": 512,
00:23:48.749      "num_blocks": 65536,
00:23:48.749      "uuid": "e21fc9e0-92d2-4216-8015-c4845d8d3535",
00:23:48.749      "assigned_rate_limits": {
00:23:48.749        "rw_ios_per_sec": 0,
00:23:48.749        "rw_mbytes_per_sec": 0,
00:23:48.749        "r_mbytes_per_sec": 0,
00:23:48.749        "w_mbytes_per_sec": 0
00:23:48.749      },
00:23:48.749      "claimed": false,
00:23:48.749      "zoned": false,
00:23:48.749      "supported_io_types": {
00:23:48.749        "read": true,
00:23:48.749        "write": true,
00:23:48.749        "unmap": true,
00:23:48.749        "write_zeroes": true,
00:23:48.749        "flush": true,
00:23:48.749        "reset": true,
00:23:48.749        "compare": false,
00:23:48.749        "compare_and_write": false,
00:23:48.749        "abort": true,
00:23:48.749        "nvme_admin": false,
00:23:48.749        "nvme_io": false
00:23:48.749      },
00:23:48.749      "memory_domains": [
00:23:48.749        {
00:23:48.749          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:48.749          "dma_device_type": 2
00:23:48.749        }
00:23:48.749      ],
00:23:48.749      "driver_specific": {}
00:23:48.749    }
00:23:48.749  ]
00:23:48.749   23:57:19	-- common/autotest_common.sh@905 -- # return 0
00:23:48.749   23:57:19	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:23:49.008  [2024-12-13 23:57:19.646143] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:49.008  [2024-12-13 23:57:19.648090] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:23:49.008  [2024-12-13 23:57:19.648281] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:23:49.008  [2024-12-13 23:57:19.648392] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:23:49.008  [2024-12-13 23:57:19.648459] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:23:49.008  [2024-12-13 23:57:19.648648] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:23:49.008  [2024-12-13 23:57:19.648710] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
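
Only BaseBdev1 exists at this point, so bdev_raid_create registers Existed_Raid but cannot bring it online; the NOTICE lines above name the three missing members, and the array sits in the configuring state until they appear. A condensed way to observe that, using the same commands as this run:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Create the array while three of the four members are still missing...
    $rpc bdev_raid_create -z 64 -s -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # ...then confirm it is registered but cannot go online yet.
    $rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # configuring
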
00:23:49.008   23:57:19	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:23:49.008   23:57:19	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:23:49.008   23:57:19	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:49.008   23:57:19	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:49.008   23:57:19	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:49.008   23:57:19	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:49.008   23:57:19	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:49.008   23:57:19	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:49.008   23:57:19	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:49.008   23:57:19	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:49.008   23:57:19	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:49.008   23:57:19	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:49.008    23:57:19	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:49.008    23:57:19	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:49.267   23:57:19	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:49.267    "name": "Existed_Raid",
00:23:49.267    "uuid": "9a92f69b-2cab-45c4-9c24-deebf6468675",
00:23:49.267    "strip_size_kb": 64,
00:23:49.267    "state": "configuring",
00:23:49.267    "raid_level": "raid5f",
00:23:49.267    "superblock": true,
00:23:49.267    "num_base_bdevs": 4,
00:23:49.267    "num_base_bdevs_discovered": 1,
00:23:49.267    "num_base_bdevs_operational": 4,
00:23:49.267    "base_bdevs_list": [
00:23:49.267      {
00:23:49.267        "name": "BaseBdev1",
00:23:49.267        "uuid": "e21fc9e0-92d2-4216-8015-c4845d8d3535",
00:23:49.267        "is_configured": true,
00:23:49.267        "data_offset": 2048,
00:23:49.267        "data_size": 63488
00:23:49.267      },
00:23:49.267      {
00:23:49.267        "name": "BaseBdev2",
00:23:49.267        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:49.267        "is_configured": false,
00:23:49.267        "data_offset": 0,
00:23:49.267        "data_size": 0
00:23:49.267      },
00:23:49.267      {
00:23:49.267        "name": "BaseBdev3",
00:23:49.267        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:49.267        "is_configured": false,
00:23:49.267        "data_offset": 0,
00:23:49.267        "data_size": 0
00:23:49.267      },
00:23:49.267      {
00:23:49.267        "name": "BaseBdev4",
00:23:49.267        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:49.267        "is_configured": false,
00:23:49.267        "data_offset": 0,
00:23:49.267        "data_size": 0
00:23:49.267      }
00:23:49.267    ]
00:23:49.267  }'
00:23:49.267   23:57:19	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:49.267   23:57:19	-- common/autotest_common.sh@10 -- # set +x
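
verify_raid_bdev_state boils down to fetching the raid's JSON description and comparing a handful of fields against the expected values; the xtrace_disable/set +x pair just keeps the large JSON blob out of the trace. A sketch of the comparison step, with the jq filter taken from the test and the [[ ]] checks standing in for the real helper's logic:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")')
    # Field names below come straight from the JSON dumped above.
    [[ $(jq -r '.state'                     <<< "$info") == configuring ]]
    [[ $(jq -r '.raid_level'                <<< "$info") == raid5f ]]
    [[ $(jq -r '.strip_size_kb'             <<< "$info") -eq 64 ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 1 ]]
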
00:23:49.834   23:57:20	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:23:50.093  [2024-12-13 23:57:20.619158] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:23:50.093  BaseBdev2
00:23:50.093   23:57:20	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:23:50.093   23:57:20	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:23:50.093   23:57:20	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:50.093   23:57:20	-- common/autotest_common.sh@899 -- # local i
00:23:50.093   23:57:20	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:50.093   23:57:20	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:50.093   23:57:20	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:50.351   23:57:20	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:23:50.351  [
00:23:50.351    {
00:23:50.351      "name": "BaseBdev2",
00:23:50.351      "aliases": [
00:23:50.351        "4175e5a7-8050-45b2-8026-9746bce60886"
00:23:50.351      ],
00:23:50.351      "product_name": "Malloc disk",
00:23:50.351      "block_size": 512,
00:23:50.351      "num_blocks": 65536,
00:23:50.351      "uuid": "4175e5a7-8050-45b2-8026-9746bce60886",
00:23:50.351      "assigned_rate_limits": {
00:23:50.351        "rw_ios_per_sec": 0,
00:23:50.351        "rw_mbytes_per_sec": 0,
00:23:50.351        "r_mbytes_per_sec": 0,
00:23:50.351        "w_mbytes_per_sec": 0
00:23:50.351      },
00:23:50.351      "claimed": true,
00:23:50.351      "claim_type": "exclusive_write",
00:23:50.351      "zoned": false,
00:23:50.351      "supported_io_types": {
00:23:50.351        "read": true,
00:23:50.351        "write": true,
00:23:50.351        "unmap": true,
00:23:50.351        "write_zeroes": true,
00:23:50.351        "flush": true,
00:23:50.351        "reset": true,
00:23:50.351        "compare": false,
00:23:50.351        "compare_and_write": false,
00:23:50.351        "abort": true,
00:23:50.351        "nvme_admin": false,
00:23:50.351        "nvme_io": false
00:23:50.351      },
00:23:50.351      "memory_domains": [
00:23:50.351        {
00:23:50.351          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:50.351          "dma_device_type": 2
00:23:50.351        }
00:23:50.351      ],
00:23:50.351      "driver_specific": {}
00:23:50.351    }
00:23:50.351  ]
00:23:50.352   23:57:21	-- common/autotest_common.sh@905 -- # return 0
00:23:50.352   23:57:21	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:23:50.352   23:57:21	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:23:50.352   23:57:21	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:50.352   23:57:21	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:50.352   23:57:21	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:50.352   23:57:21	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:50.352   23:57:21	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:50.352   23:57:21	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:50.352   23:57:21	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:50.352   23:57:21	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:50.352   23:57:21	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:50.352   23:57:21	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:50.352    23:57:21	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:50.352    23:57:21	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:50.610   23:57:21	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:50.610    "name": "Existed_Raid",
00:23:50.610    "uuid": "9a92f69b-2cab-45c4-9c24-deebf6468675",
00:23:50.610    "strip_size_kb": 64,
00:23:50.610    "state": "configuring",
00:23:50.610    "raid_level": "raid5f",
00:23:50.610    "superblock": true,
00:23:50.610    "num_base_bdevs": 4,
00:23:50.610    "num_base_bdevs_discovered": 2,
00:23:50.610    "num_base_bdevs_operational": 4,
00:23:50.610    "base_bdevs_list": [
00:23:50.610      {
00:23:50.610        "name": "BaseBdev1",
00:23:50.610        "uuid": "e21fc9e0-92d2-4216-8015-c4845d8d3535",
00:23:50.610        "is_configured": true,
00:23:50.610        "data_offset": 2048,
00:23:50.610        "data_size": 63488
00:23:50.610      },
00:23:50.610      {
00:23:50.610        "name": "BaseBdev2",
00:23:50.610        "uuid": "4175e5a7-8050-45b2-8026-9746bce60886",
00:23:50.610        "is_configured": true,
00:23:50.610        "data_offset": 2048,
00:23:50.610        "data_size": 63488
00:23:50.610      },
00:23:50.610      {
00:23:50.610        "name": "BaseBdev3",
00:23:50.610        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:50.610        "is_configured": false,
00:23:50.610        "data_offset": 0,
00:23:50.610        "data_size": 0
00:23:50.610      },
00:23:50.610      {
00:23:50.610        "name": "BaseBdev4",
00:23:50.610        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:50.610        "is_configured": false,
00:23:50.610        "data_offset": 0,
00:23:50.610        "data_size": 0
00:23:50.610      }
00:23:50.610    ]
00:23:50.610  }'
00:23:50.610   23:57:21	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:50.610   23:57:21	-- common/autotest_common.sh@10 -- # set +x
00:23:51.546   23:57:21	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:23:51.546  [2024-12-13 23:57:22.135065] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:23:51.546  BaseBdev3
00:23:51.546   23:57:22	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:23:51.546   23:57:22	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:23:51.546   23:57:22	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:51.546   23:57:22	-- common/autotest_common.sh@899 -- # local i
00:23:51.546   23:57:22	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:51.546   23:57:22	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:51.546   23:57:22	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:51.806   23:57:22	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:23:52.065  [
00:23:52.065    {
00:23:52.065      "name": "BaseBdev3",
00:23:52.065      "aliases": [
00:23:52.065        "f0f21c1a-6725-480f-afd4-0f27d95ce67b"
00:23:52.065      ],
00:23:52.065      "product_name": "Malloc disk",
00:23:52.065      "block_size": 512,
00:23:52.065      "num_blocks": 65536,
00:23:52.065      "uuid": "f0f21c1a-6725-480f-afd4-0f27d95ce67b",
00:23:52.065      "assigned_rate_limits": {
00:23:52.065        "rw_ios_per_sec": 0,
00:23:52.065        "rw_mbytes_per_sec": 0,
00:23:52.065        "r_mbytes_per_sec": 0,
00:23:52.065        "w_mbytes_per_sec": 0
00:23:52.065      },
00:23:52.065      "claimed": true,
00:23:52.065      "claim_type": "exclusive_write",
00:23:52.065      "zoned": false,
00:23:52.065      "supported_io_types": {
00:23:52.065        "read": true,
00:23:52.065        "write": true,
00:23:52.065        "unmap": true,
00:23:52.065        "write_zeroes": true,
00:23:52.065        "flush": true,
00:23:52.065        "reset": true,
00:23:52.065        "compare": false,
00:23:52.065        "compare_and_write": false,
00:23:52.065        "abort": true,
00:23:52.065        "nvme_admin": false,
00:23:52.065        "nvme_io": false
00:23:52.065      },
00:23:52.065      "memory_domains": [
00:23:52.065        {
00:23:52.065          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:52.065          "dma_device_type": 2
00:23:52.065        }
00:23:52.065      ],
00:23:52.065      "driver_specific": {}
00:23:52.065    }
00:23:52.065  ]
00:23:52.065   23:57:22	-- common/autotest_common.sh@905 -- # return 0
00:23:52.065   23:57:22	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:23:52.065   23:57:22	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:23:52.065   23:57:22	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:52.065   23:57:22	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:52.065   23:57:22	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:52.065   23:57:22	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:52.065   23:57:22	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:52.065   23:57:22	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:52.065   23:57:22	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:52.065   23:57:22	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:52.065   23:57:22	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:52.065   23:57:22	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:52.065    23:57:22	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:52.065    23:57:22	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:52.324   23:57:22	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:52.324    "name": "Existed_Raid",
00:23:52.324    "uuid": "9a92f69b-2cab-45c4-9c24-deebf6468675",
00:23:52.324    "strip_size_kb": 64,
00:23:52.324    "state": "configuring",
00:23:52.324    "raid_level": "raid5f",
00:23:52.324    "superblock": true,
00:23:52.324    "num_base_bdevs": 4,
00:23:52.324    "num_base_bdevs_discovered": 3,
00:23:52.324    "num_base_bdevs_operational": 4,
00:23:52.324    "base_bdevs_list": [
00:23:52.324      {
00:23:52.324        "name": "BaseBdev1",
00:23:52.324        "uuid": "e21fc9e0-92d2-4216-8015-c4845d8d3535",
00:23:52.324        "is_configured": true,
00:23:52.324        "data_offset": 2048,
00:23:52.324        "data_size": 63488
00:23:52.324      },
00:23:52.324      {
00:23:52.324        "name": "BaseBdev2",
00:23:52.324        "uuid": "4175e5a7-8050-45b2-8026-9746bce60886",
00:23:52.324        "is_configured": true,
00:23:52.324        "data_offset": 2048,
00:23:52.324        "data_size": 63488
00:23:52.324      },
00:23:52.324      {
00:23:52.324        "name": "BaseBdev3",
00:23:52.324        "uuid": "f0f21c1a-6725-480f-afd4-0f27d95ce67b",
00:23:52.324        "is_configured": true,
00:23:52.324        "data_offset": 2048,
00:23:52.324        "data_size": 63488
00:23:52.324      },
00:23:52.324      {
00:23:52.324        "name": "BaseBdev4",
00:23:52.324        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:52.324        "is_configured": false,
00:23:52.324        "data_offset": 0,
00:23:52.324        "data_size": 0
00:23:52.324      }
00:23:52.324    ]
00:23:52.324  }'
00:23:52.324   23:57:22	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:52.324   23:57:22	-- common/autotest_common.sh@10 -- # set +x
00:23:52.892   23:57:23	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:23:53.150  [2024-12-13 23:57:23.661014] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:23:53.150  [2024-12-13 23:57:23.661462] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580
00:23:53.150  [2024-12-13 23:57:23.661616] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:23:53.150  [2024-12-13 23:57:23.661763] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860
00:23:53.150  BaseBdev4
00:23:53.150  [2024-12-13 23:57:23.667518] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580
00:23:53.150  [2024-12-13 23:57:23.667675] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580
00:23:53.150  [2024-12-13 23:57:23.667984] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:53.150   23:57:23	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:23:53.150   23:57:23	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:23:53.150   23:57:23	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:53.150   23:57:23	-- common/autotest_common.sh@899 -- # local i
00:23:53.150   23:57:23	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:53.150   23:57:23	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:53.150   23:57:23	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:53.409   23:57:23	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:23:53.409  [
00:23:53.409    {
00:23:53.409      "name": "BaseBdev4",
00:23:53.409      "aliases": [
00:23:53.409        "31871ae9-35e7-41bb-b981-fef154686f6b"
00:23:53.409      ],
00:23:53.409      "product_name": "Malloc disk",
00:23:53.409      "block_size": 512,
00:23:53.409      "num_blocks": 65536,
00:23:53.409      "uuid": "31871ae9-35e7-41bb-b981-fef154686f6b",
00:23:53.409      "assigned_rate_limits": {
00:23:53.409        "rw_ios_per_sec": 0,
00:23:53.409        "rw_mbytes_per_sec": 0,
00:23:53.409        "r_mbytes_per_sec": 0,
00:23:53.409        "w_mbytes_per_sec": 0
00:23:53.409      },
00:23:53.409      "claimed": true,
00:23:53.409      "claim_type": "exclusive_write",
00:23:53.409      "zoned": false,
00:23:53.409      "supported_io_types": {
00:23:53.409        "read": true,
00:23:53.409        "write": true,
00:23:53.409        "unmap": true,
00:23:53.409        "write_zeroes": true,
00:23:53.409        "flush": true,
00:23:53.409        "reset": true,
00:23:53.409        "compare": false,
00:23:53.409        "compare_and_write": false,
00:23:53.409        "abort": true,
00:23:53.409        "nvme_admin": false,
00:23:53.409        "nvme_io": false
00:23:53.409      },
00:23:53.409      "memory_domains": [
00:23:53.409        {
00:23:53.409          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:53.409          "dma_device_type": 2
00:23:53.409        }
00:23:53.409      ],
00:23:53.409      "driver_specific": {}
00:23:53.409    }
00:23:53.409  ]
00:23:53.667   23:57:24	-- common/autotest_common.sh@905 -- # return 0
00:23:53.667   23:57:24	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:23:53.667   23:57:24	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:23:53.667   23:57:24	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:23:53.667   23:57:24	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:53.667   23:57:24	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:53.667   23:57:24	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:53.667   23:57:24	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:53.667   23:57:24	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:53.667   23:57:24	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:53.667   23:57:24	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:53.667   23:57:24	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:53.667   23:57:24	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:53.667    23:57:24	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:53.667    23:57:24	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:53.667   23:57:24	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:53.667    "name": "Existed_Raid",
00:23:53.667    "uuid": "9a92f69b-2cab-45c4-9c24-deebf6468675",
00:23:53.667    "strip_size_kb": 64,
00:23:53.667    "state": "online",
00:23:53.667    "raid_level": "raid5f",
00:23:53.667    "superblock": true,
00:23:53.667    "num_base_bdevs": 4,
00:23:53.667    "num_base_bdevs_discovered": 4,
00:23:53.667    "num_base_bdevs_operational": 4,
00:23:53.667    "base_bdevs_list": [
00:23:53.667      {
00:23:53.667        "name": "BaseBdev1",
00:23:53.667        "uuid": "e21fc9e0-92d2-4216-8015-c4845d8d3535",
00:23:53.667        "is_configured": true,
00:23:53.667        "data_offset": 2048,
00:23:53.667        "data_size": 63488
00:23:53.667      },
00:23:53.667      {
00:23:53.667        "name": "BaseBdev2",
00:23:53.667        "uuid": "4175e5a7-8050-45b2-8026-9746bce60886",
00:23:53.667        "is_configured": true,
00:23:53.667        "data_offset": 2048,
00:23:53.667        "data_size": 63488
00:23:53.667      },
00:23:53.667      {
00:23:53.667        "name": "BaseBdev3",
00:23:53.667        "uuid": "f0f21c1a-6725-480f-afd4-0f27d95ce67b",
00:23:53.667        "is_configured": true,
00:23:53.667        "data_offset": 2048,
00:23:53.667        "data_size": 63488
00:23:53.667      },
00:23:53.667      {
00:23:53.667        "name": "BaseBdev4",
00:23:53.667        "uuid": "31871ae9-35e7-41bb-b981-fef154686f6b",
00:23:53.667        "is_configured": true,
00:23:53.667        "data_offset": 2048,
00:23:53.667        "data_size": 63488
00:23:53.667      }
00:23:53.667    ]
00:23:53.667  }'
00:23:53.667   23:57:24	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:53.667   23:57:24	-- common/autotest_common.sh@10 -- # set +x
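
With BaseBdev4 claimed, all four slots are populated and the DEBUG lines above show the array being configured for real: the io device is registered, blockcnt 190464 is announced, and the state flips from configuring to online with 4/4 members discovered. That blockcnt is worth a sanity check against the JSON, assuming data_offset 2048 is the space reserved by the -s superblock and that raid5f, like classic RAID5, spends one member's worth of each stripe on parity:

    #   per-member data: 65536 - 2048 = 63488 blocks
    #   usable members:  4 - 1 = 3
    echo $(( (65536 - 2048) * (4 - 1) ))    # 190464, matching the DEBUG line
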
00:23:54.606   23:57:24	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:23:54.606  [2024-12-13 23:57:25.210655] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:23:54.606   23:57:25	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:23:54.606   23:57:25	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f
00:23:54.606   23:57:25	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:23:54.606   23:57:25	-- bdev/bdev_raid.sh@196 -- # return 0
00:23:54.606   23:57:25	-- bdev/bdev_raid.sh@267 -- # expected_state=online
00:23:54.606   23:57:25	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:23:54.606   23:57:25	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:54.606   23:57:25	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:54.606   23:57:25	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:54.606   23:57:25	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:54.606   23:57:25	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:54.606   23:57:25	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:54.606   23:57:25	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:54.606   23:57:25	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:54.606   23:57:25	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:54.606    23:57:25	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:54.606    23:57:25	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:54.867   23:57:25	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:54.867    "name": "Existed_Raid",
00:23:54.867    "uuid": "9a92f69b-2cab-45c4-9c24-deebf6468675",
00:23:54.867    "strip_size_kb": 64,
00:23:54.867    "state": "online",
00:23:54.867    "raid_level": "raid5f",
00:23:54.867    "superblock": true,
00:23:54.867    "num_base_bdevs": 4,
00:23:54.867    "num_base_bdevs_discovered": 3,
00:23:54.867    "num_base_bdevs_operational": 3,
00:23:54.867    "base_bdevs_list": [
00:23:54.867      {
00:23:54.867        "name": null,
00:23:54.867        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:54.867        "is_configured": false,
00:23:54.867        "data_offset": 2048,
00:23:54.867        "data_size": 63488
00:23:54.867      },
00:23:54.867      {
00:23:54.867        "name": "BaseBdev2",
00:23:54.867        "uuid": "4175e5a7-8050-45b2-8026-9746bce60886",
00:23:54.867        "is_configured": true,
00:23:54.867        "data_offset": 2048,
00:23:54.867        "data_size": 63488
00:23:54.867      },
00:23:54.867      {
00:23:54.867        "name": "BaseBdev3",
00:23:54.867        "uuid": "f0f21c1a-6725-480f-afd4-0f27d95ce67b",
00:23:54.867        "is_configured": true,
00:23:54.867        "data_offset": 2048,
00:23:54.867        "data_size": 63488
00:23:54.867      },
00:23:54.867      {
00:23:54.867        "name": "BaseBdev4",
00:23:54.867        "uuid": "31871ae9-35e7-41bb-b981-fef154686f6b",
00:23:54.867        "is_configured": true,
00:23:54.867        "data_offset": 2048,
00:23:54.867        "data_size": 63488
00:23:54.867      }
00:23:54.867    ]
00:23:54.867  }'
00:23:54.867   23:57:25	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:54.867   23:57:25	-- common/autotest_common.sh@10 -- # set +x
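
Deleting BaseBdev1 exercises the redundancy path: has_redundancy returns 0 for raid5f, so the expected state stays online, and the JSON above shows a degraded but running array with 3 of 4 members discovered and the removed slot reduced to a null name and all-zero uuid. Condensed:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_delete BaseBdev1
    $rpc bdev_raid_get_bdevs all | jq -r '
        .[] | select(.name == "Existed_Raid")
            | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
    # raid5f survives a single missing member: prints "online 3/4"
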
00:23:55.435   23:57:26	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:23:55.435   23:57:26	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:55.435    23:57:26	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:55.435    23:57:26	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:23:55.694   23:57:26	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:23:55.694   23:57:26	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:23:55.694   23:57:26	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:23:55.952  [2024-12-13 23:57:26.446502] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:23:55.952  [2024-12-13 23:57:26.446743] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:55.952  [2024-12-13 23:57:26.447061] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
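
A second loss is one more than raid5f can absorb: as soon as BaseBdev2 goes away, the DEBUG lines show the array deconfiguring from online to offline and its internals being destructed, though the raid bdev object itself stays registered, as the name check that follows demonstrates. Roughly:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_delete BaseBdev2
    # The raid bdev is offline but still findable by name:
    $rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"]'    # Existed_Raid
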
00:23:55.952   23:57:26	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:23:55.952   23:57:26	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:55.952    23:57:26	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:55.952    23:57:26	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:23:56.211   23:57:26	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:23:56.211   23:57:26	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:23:56.211   23:57:26	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:23:56.469  [2024-12-13 23:57:26.962285] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:23:56.469   23:57:27	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:23:56.469   23:57:27	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:56.469    23:57:27	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:56.469    23:57:27	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:23:56.728   23:57:27	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:23:56.728   23:57:27	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:23:56.728   23:57:27	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:23:56.987  [2024-12-13 23:57:27.514758] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:23:56.987  [2024-12-13 23:57:27.515115] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline
00:23:56.987   23:57:27	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:23:56.987   23:57:27	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:56.987    23:57:27	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:56.987    23:57:27	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:23:57.245   23:57:27	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:23:57.245   23:57:27	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:23:57.245   23:57:27	-- bdev/bdev_raid.sh@287 -- # killprocess 129753
00:23:57.245   23:57:27	-- common/autotest_common.sh@936 -- # '[' -z 129753 ']'
00:23:57.245   23:57:27	-- common/autotest_common.sh@940 -- # kill -0 129753
00:23:57.245    23:57:27	-- common/autotest_common.sh@941 -- # uname
00:23:57.245   23:57:27	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:57.245    23:57:27	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129753
00:23:57.246  killing process with pid 129753
00:23:57.246   23:57:27	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:57.246   23:57:27	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:57.246   23:57:27	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 129753'
00:23:57.246   23:57:27	-- common/autotest_common.sh@955 -- # kill 129753
00:23:57.246  [2024-12-13 23:57:27.796544] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:23:57.246   23:57:27	-- common/autotest_common.sh@960 -- # wait 129753
00:23:57.246  [2024-12-13 23:57:27.796651] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
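
killprocess is the harness's guarded teardown: it refuses an empty pid, confirms the process is still alive, then kills it and waits for the fini/exit DEBUG lines above. A trimmed-down stand-in (the real helper also inspects ps -o comm= and special-cases sudo, as the trace shows):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1     # is the process still alive?
        kill "$pid"
        wait "$pid"                    # reap it; bdev_svc is our child here
    }
    killprocess 129753
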
00:23:58.181  ************************************
00:23:58.181  END TEST raid5f_state_function_test_sb
00:23:58.181  ************************************
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@289 -- # return 0
00:23:58.181  
00:23:58.181  real	0m14.442s
00:23:58.181  user	0m25.632s
00:23:58.181  sys	0m1.655s
00:23:58.181   23:57:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:23:58.181   23:57:28	-- common/autotest_common.sh@10 -- # set +x
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4
00:23:58.181   23:57:28	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:23:58.181   23:57:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:23:58.181   23:57:28	-- common/autotest_common.sh@10 -- # set +x
00:23:58.181  ************************************
00:23:58.181  START TEST raid5f_superblock_test
00:23:58.181  ************************************
00:23:58.181   23:57:28	-- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 4
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']'
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@350 -- # strip_size=64
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@357 -- # raid_pid=130193
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:23:58.181   23:57:28	-- bdev/bdev_raid.sh@358 -- # waitforlisten 130193 /var/tmp/spdk-raid.sock
00:23:58.181   23:57:28	-- common/autotest_common.sh@829 -- # '[' -z 130193 ']'
00:23:58.181   23:57:28	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:23:58.181   23:57:28	-- common/autotest_common.sh@834 -- # local max_retries=100
00:23:58.182   23:57:28	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:23:58.182  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:23:58.182   23:57:28	-- common/autotest_common.sh@838 -- # xtrace_disable
00:23:58.182   23:57:28	-- common/autotest_common.sh@10 -- # set +x
00:23:58.441  [2024-12-13 23:57:28.966380] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:58.441  [2024-12-13 23:57:28.966839] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130193 ]
00:23:58.441  [2024-12-13 23:57:29.138345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:58.699  [2024-12-13 23:57:29.356459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:58.958  [2024-12-13 23:57:29.540984] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:23:59.217   23:57:29	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:59.217   23:57:29	-- common/autotest_common.sh@862 -- # return 0
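
Each test gets a fresh bdev_svc application, launched with -r pointing at the raid socket and -L bdev_raid to produce the DEBUG lines quoted throughout, after which waitforlisten blocks until the UNIX socket answers. Roughly how pid 130193 came to be; the rpc_get_methods probe here is an assumption standing in for the real waitforlisten helper:

    svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    "$svc" -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
            -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
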
00:23:59.217   23:57:29	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:23:59.217   23:57:29	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:23:59.217   23:57:29	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:23:59.217   23:57:29	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:23:59.217   23:57:29	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:23:59.217   23:57:29	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:59.217   23:57:29	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:23:59.217   23:57:29	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:59.217   23:57:29	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:23:59.476  malloc1
00:23:59.476   23:57:30	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:23:59.734  [2024-12-13 23:57:30.364702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:23:59.734  [2024-12-13 23:57:30.366680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:59.734  [2024-12-13 23:57:30.367054] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:23:59.734  [2024-12-13 23:57:30.367378] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:59.734  [2024-12-13 23:57:30.372783] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:59.734  [2024-12-13 23:57:30.373121] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:23:59.734  pt1
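
The superblock test builds each base bdev as a passthru on top of a malloc disk, with a fixed UUID for the passthru layer so every run produces the same identifiers; the NOTICE sequence above (match on malloc1, claim, register pt1) repeats identically for pt2 through pt4. The two RPCs per member, as issued in this run:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # 65536 x 512-byte blocks (32 MiB) of backing store, then a passthru on top:
    $rpc bdev_malloc_create 32 512 -b malloc1
    $rpc bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001
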
00:23:59.734   23:57:30	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:23:59.734   23:57:30	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:23:59.734   23:57:30	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:23:59.734   23:57:30	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:23:59.734   23:57:30	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:23:59.734   23:57:30	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:59.734   23:57:30	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:23:59.734   23:57:30	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:59.734   23:57:30	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:23:59.993  malloc2
00:23:59.993   23:57:30	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:24:00.252  [2024-12-13 23:57:30.787333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:24:00.252  [2024-12-13 23:57:30.787582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:00.253  [2024-12-13 23:57:30.787662] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:24:00.253  [2024-12-13 23:57:30.787961] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:00.253  [2024-12-13 23:57:30.790135] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:00.253  [2024-12-13 23:57:30.790306] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:24:00.253  pt2
00:24:00.253   23:57:30	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:24:00.253   23:57:30	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:24:00.253   23:57:30	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:24:00.253   23:57:30	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:24:00.253   23:57:30	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:24:00.253   23:57:30	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:24:00.253   23:57:30	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:24:00.253   23:57:30	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:24:00.253   23:57:30	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:24:00.515  malloc3
00:24:00.515   23:57:31	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:24:00.515  [2024-12-13 23:57:31.241478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:24:00.515  [2024-12-13 23:57:31.241758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:00.515  [2024-12-13 23:57:31.241857] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:24:00.515  [2024-12-13 23:57:31.242138] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:00.515  [2024-12-13 23:57:31.244821] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:00.515  [2024-12-13 23:57:31.244996] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:24:00.515  pt3
00:24:00.789   23:57:31	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:24:00.789   23:57:31	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:24:00.789   23:57:31	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4
00:24:00.789   23:57:31	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4
00:24:00.789   23:57:31	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:24:00.789   23:57:31	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:24:00.789   23:57:31	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:24:00.789   23:57:31	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:24:00.789   23:57:31	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4
00:24:00.789  malloc4
00:24:00.789   23:57:31	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:24:01.066  [2024-12-13 23:57:31.650140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:24:01.066  [2024-12-13 23:57:31.650339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:01.066  [2024-12-13 23:57:31.650408] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80
00:24:01.066  [2024-12-13 23:57:31.650527] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:01.066  [2024-12-13 23:57:31.652723] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:01.066  [2024-12-13 23:57:31.652897] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:24:01.066  pt4
00:24:01.066   23:57:31	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:24:01.066   23:57:31	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:24:01.066   23:57:31	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
00:24:01.325  [2024-12-13 23:57:31.854223] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:24:01.325  [2024-12-13 23:57:31.856101] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:24:01.325  [2024-12-13 23:57:31.856286] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:24:01.325  [2024-12-13 23:57:31.856401] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:24:01.325  [2024-12-13 23:57:31.856728] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380
00:24:01.325  [2024-12-13 23:57:31.856847] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:24:01.325  [2024-12-13 23:57:31.856986] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0
00:24:01.325  [2024-12-13 23:57:31.862882] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380
00:24:01.325  [2024-12-13 23:57:31.863012] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380
00:24:01.325  [2024-12-13 23:57:31.863299] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
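
raid_bdev1 is created with -s, so a superblock is written for each member; that is what the rest of this test leans on, since the pt* bdevs can later be torn down and recreated and be recognized again on examine. A sketch of the create plus the uuid query the test performs next (the uuid is presumably what the superblocks record to tie members back together):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
    $rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid'
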
00:24:01.325   23:57:31	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:24:01.325   23:57:31	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:01.325   23:57:31	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:01.325   23:57:31	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:01.325   23:57:31	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:01.325   23:57:31	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:01.325   23:57:31	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:01.325   23:57:31	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:01.325   23:57:31	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:01.325   23:57:31	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:01.325    23:57:31	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:01.325    23:57:31	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:01.583   23:57:32	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:01.583    "name": "raid_bdev1",
00:24:01.583    "uuid": "60a18ee5-f7c7-40ea-9b59-9eca0aa36972",
00:24:01.583    "strip_size_kb": 64,
00:24:01.583    "state": "online",
00:24:01.583    "raid_level": "raid5f",
00:24:01.583    "superblock": true,
00:24:01.583    "num_base_bdevs": 4,
00:24:01.583    "num_base_bdevs_discovered": 4,
00:24:01.583    "num_base_bdevs_operational": 4,
00:24:01.583    "base_bdevs_list": [
00:24:01.583      {
00:24:01.583        "name": "pt1",
00:24:01.583        "uuid": "3e179f83-22e2-5ab2-b628-ed86daddd658",
00:24:01.583        "is_configured": true,
00:24:01.583        "data_offset": 2048,
00:24:01.583        "data_size": 63488
00:24:01.583      },
00:24:01.583      {
00:24:01.583        "name": "pt2",
00:24:01.583        "uuid": "fdedd021-c1c5-5a6f-9c72-bd224b3ab51c",
00:24:01.583        "is_configured": true,
00:24:01.583        "data_offset": 2048,
00:24:01.583        "data_size": 63488
00:24:01.583      },
00:24:01.583      {
00:24:01.583        "name": "pt3",
00:24:01.583        "uuid": "906d2baf-d1bb-5daa-9612-3f5ed1786c46",
00:24:01.583        "is_configured": true,
00:24:01.583        "data_offset": 2048,
00:24:01.583        "data_size": 63488
00:24:01.583      },
00:24:01.583      {
00:24:01.583        "name": "pt4",
00:24:01.583        "uuid": "e4aa569a-5abe-5532-9e54-169f01ff5312",
00:24:01.583        "is_configured": true,
00:24:01.583        "data_offset": 2048,
00:24:01.583        "data_size": 63488
00:24:01.583      }
00:24:01.583    ]
00:24:01.583  }'
00:24:01.583   23:57:32	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:01.583   23:57:32	-- common/autotest_common.sh@10 -- # set +x
00:24:02.150    23:57:32	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:24:02.150    23:57:32	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:24:02.150  [2024-12-13 23:57:32.845810] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:24:02.150   23:57:32	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=60a18ee5-f7c7-40ea-9b59-9eca0aa36972
00:24:02.150   23:57:32	-- bdev/bdev_raid.sh@380 -- # '[' -z 60a18ee5-f7c7-40ea-9b59-9eca0aa36972 ']'
00:24:02.150   23:57:32	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:24:02.409  [2024-12-13 23:57:33.085729] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:02.409  [2024-12-13 23:57:33.085860] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:24:02.409  [2024-12-13 23:57:33.086021] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:02.409  [2024-12-13 23:57:33.086208] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:02.409  [2024-12-13 23:57:33.086311] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline
00:24:02.409    23:57:33	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:02.409    23:57:33	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:24:02.667   23:57:33	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:24:02.667   23:57:33	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
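
bdev_raid_delete unregisters the array (the cleanup DEBUG lines end with state offline, and the query above comes back empty), but the members' raid metadata survives the delete. Since the passthru layer is transparent, those superblocks actually sit on the underlying malloc disks, which is exactly what the failed rebuild further below demonstrates. In short:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_delete raid_bdev1
    $rpc bdev_raid_get_bdevs all | jq -r '.[]'    # empty: the raid bdev is gone
    # The members' superblocks are left in place.
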
00:24:02.667   23:57:33	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:24:02.667   23:57:33	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:24:02.926   23:57:33	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:24:02.926   23:57:33	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:24:02.926   23:57:33	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:24:02.926   23:57:33	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:24:03.184   23:57:33	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:24:03.184   23:57:33	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:24:03.442    23:57:34	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:24:03.442    23:57:34	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:24:03.701   23:57:34	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:24:03.701   23:57:34	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:24:03.701   23:57:34	-- common/autotest_common.sh@650 -- # local es=0
00:24:03.701   23:57:34	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:24:03.701   23:57:34	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:03.701   23:57:34	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:03.701    23:57:34	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:03.701   23:57:34	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:03.701    23:57:34	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:03.701   23:57:34	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:03.701   23:57:34	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:03.701   23:57:34	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:24:03.701   23:57:34	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:24:03.701  [2024-12-13 23:57:34.393941] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:24:03.701  [2024-12-13 23:57:34.395836] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:24:03.701  [2024-12-13 23:57:34.396021] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:24:03.701  [2024-12-13 23:57:34.396097] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:24:03.701  [2024-12-13 23:57:34.396245] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:24:03.701  [2024-12-13 23:57:34.396355] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:24:03.701  [2024-12-13 23:57:34.396452] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:24:03.701  [2024-12-13 23:57:34.396602] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4
00:24:03.701  [2024-12-13 23:57:34.396717] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:03.701  [2024-12-13 23:57:34.396756] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring
00:24:03.701  request:
00:24:03.701  {
00:24:03.701    "name": "raid_bdev1",
00:24:03.701    "raid_level": "raid5f",
00:24:03.701    "base_bdevs": [
00:24:03.701      "malloc1",
00:24:03.701      "malloc2",
00:24:03.701      "malloc3",
00:24:03.701      "malloc4"
00:24:03.701    ],
00:24:03.701    "superblock": false,
00:24:03.701    "strip_size_kb": 64,
00:24:03.701    "method": "bdev_raid_create",
00:24:03.701    "req_id": 1
00:24:03.701  }
00:24:03.701  Got JSON-RPC error response
00:24:03.701  response:
00:24:03.701  {
00:24:03.701    "code": -17,
00:24:03.701    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:24:03.701  }
00:24:03.701   23:57:34	-- common/autotest_common.sh@653 -- # es=1
00:24:03.701   23:57:34	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:24:03.701   23:57:34	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:24:03.701   23:57:34	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
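
Because malloc1 through malloc4 still carry the old superblocks, creating a new array directly over them is rejected: configuration starts, hits "Existing raid superblock found" on every member, and the RPC fails with code -17 (File exists) while the half-built instance is cleaned up. The harness's NOT wrapper simply asserts the nonzero exit; a hedged equivalent:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    if $rpc bdev_raid_create -z 64 -r raid5f \
            -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo "unexpected: create over superblocked bdevs must fail" >&2
        exit 1
    fi
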
00:24:03.701    23:57:34	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:03.701    23:57:34	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:24:03.959   23:57:34	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:24:03.960   23:57:34	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:24:03.960   23:57:34	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:24:04.218  [2024-12-13 23:57:34.845979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:24:04.218  [2024-12-13 23:57:34.846165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:04.218  [2024-12-13 23:57:34.846231] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:24:04.218  [2024-12-13 23:57:34.846347] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:04.218  [2024-12-13 23:57:34.848455] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:04.218  [2024-12-13 23:57:34.848634] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:24:04.218  [2024-12-13 23:57:34.848825] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:24:04.218  [2024-12-13 23:57:34.848976] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:24:04.218  pt1
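
Recreating pt1 shows the reassembly path: as soon as the passthru registers, bdev_raid's examine hook reads the superblock ("raid superblock found on bdev pt1"), claims pt1, and resurrects raid_bdev1 in the configuring state with 1 of 4 members discovered, as the JSON below reports. In sketch form:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001
    $rpc bdev_raid_get_bdevs all | jq -r '
        .[] | select(.name == "raid_bdev1")
            | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
    # configuring 1/4, with no bdev_raid_create call needed
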
00:24:04.218   23:57:34	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:24:04.218   23:57:34	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:04.218   23:57:34	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:24:04.218   23:57:34	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:04.218   23:57:34	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:04.218   23:57:34	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:04.218   23:57:34	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:04.218   23:57:34	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:04.218   23:57:34	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:04.218   23:57:34	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:04.218    23:57:34	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:04.218    23:57:34	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:04.476   23:57:35	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:04.476    "name": "raid_bdev1",
00:24:04.476    "uuid": "60a18ee5-f7c7-40ea-9b59-9eca0aa36972",
00:24:04.476    "strip_size_kb": 64,
00:24:04.476    "state": "configuring",
00:24:04.476    "raid_level": "raid5f",
00:24:04.476    "superblock": true,
00:24:04.476    "num_base_bdevs": 4,
00:24:04.476    "num_base_bdevs_discovered": 1,
00:24:04.476    "num_base_bdevs_operational": 4,
00:24:04.476    "base_bdevs_list": [
00:24:04.476      {
00:24:04.476        "name": "pt1",
00:24:04.476        "uuid": "3e179f83-22e2-5ab2-b628-ed86daddd658",
00:24:04.476        "is_configured": true,
00:24:04.476        "data_offset": 2048,
00:24:04.476        "data_size": 63488
00:24:04.476      },
00:24:04.476      {
00:24:04.476        "name": null,
00:24:04.476        "uuid": "fdedd021-c1c5-5a6f-9c72-bd224b3ab51c",
00:24:04.476        "is_configured": false,
00:24:04.476        "data_offset": 2048,
00:24:04.476        "data_size": 63488
00:24:04.476      },
00:24:04.476      {
00:24:04.476        "name": null,
00:24:04.476        "uuid": "906d2baf-d1bb-5daa-9612-3f5ed1786c46",
00:24:04.477        "is_configured": false,
00:24:04.477        "data_offset": 2048,
00:24:04.477        "data_size": 63488
00:24:04.477      },
00:24:04.477      {
00:24:04.477        "name": null,
00:24:04.477        "uuid": "e4aa569a-5abe-5532-9e54-169f01ff5312",
00:24:04.477        "is_configured": false,
00:24:04.477        "data_offset": 2048,
00:24:04.477        "data_size": 63488
00:24:04.477      }
00:24:04.477    ]
00:24:04.477  }'
00:24:04.477   23:57:35	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:04.477   23:57:35	-- common/autotest_common.sh@10 -- # set +x
00:24:05.047   23:57:35	-- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']'
00:24:05.047   23:57:35	-- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:24:05.305  [2024-12-13 23:57:35.870215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:24:05.305  [2024-12-13 23:57:35.870396] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:05.305  [2024-12-13 23:57:35.870470] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:24:05.305  [2024-12-13 23:57:35.870582] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:05.305  [2024-12-13 23:57:35.871034] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:05.305  [2024-12-13 23:57:35.871219] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:24:05.305  [2024-12-13 23:57:35.871437] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:24:05.305  [2024-12-13 23:57:35.871564] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:24:05.305  pt2
00:24:05.306   23:57:35	-- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:24:05.565  [2024-12-13 23:57:36.062251] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:24:05.565   23:57:36	-- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:24:05.565   23:57:36	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:05.565   23:57:36	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:24:05.565   23:57:36	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:05.565   23:57:36	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:05.565   23:57:36	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:05.565   23:57:36	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:05.565   23:57:36	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:05.565   23:57:36	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:05.565   23:57:36	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:05.565    23:57:36	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:05.565    23:57:36	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:05.823   23:57:36	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:05.823    "name": "raid_bdev1",
00:24:05.823    "uuid": "60a18ee5-f7c7-40ea-9b59-9eca0aa36972",
00:24:05.823    "strip_size_kb": 64,
00:24:05.823    "state": "configuring",
00:24:05.823    "raid_level": "raid5f",
00:24:05.823    "superblock": true,
00:24:05.823    "num_base_bdevs": 4,
00:24:05.823    "num_base_bdevs_discovered": 1,
00:24:05.823    "num_base_bdevs_operational": 4,
00:24:05.823    "base_bdevs_list": [
00:24:05.823      {
00:24:05.823        "name": "pt1",
00:24:05.823        "uuid": "3e179f83-22e2-5ab2-b628-ed86daddd658",
00:24:05.823        "is_configured": true,
00:24:05.823        "data_offset": 2048,
00:24:05.823        "data_size": 63488
00:24:05.823      },
00:24:05.823      {
00:24:05.823        "name": null,
00:24:05.823        "uuid": "fdedd021-c1c5-5a6f-9c72-bd224b3ab51c",
00:24:05.823        "is_configured": false,
00:24:05.823        "data_offset": 2048,
00:24:05.823        "data_size": 63488
00:24:05.823      },
00:24:05.823      {
00:24:05.823        "name": null,
00:24:05.823        "uuid": "906d2baf-d1bb-5daa-9612-3f5ed1786c46",
00:24:05.823        "is_configured": false,
00:24:05.823        "data_offset": 2048,
00:24:05.823        "data_size": 63488
00:24:05.823      },
00:24:05.823      {
00:24:05.823        "name": null,
00:24:05.823        "uuid": "e4aa569a-5abe-5532-9e54-169f01ff5312",
00:24:05.823        "is_configured": false,
00:24:05.823        "data_offset": 2048,
00:24:05.823        "data_size": 63488
00:24:05.823      }
00:24:05.823    ]
00:24:05.823  }'
00:24:05.823   23:57:36	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:05.823   23:57:36	-- common/autotest_common.sh@10 -- # set +x
00:24:06.390   23:57:36	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:24:06.390   23:57:36	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:24:06.390   23:57:36	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:24:06.648  [2024-12-13 23:57:37.214470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:24:06.648  [2024-12-13 23:57:37.214662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:06.648  [2024-12-13 23:57:37.214737] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x61600000ab80
00:24:06.648  [2024-12-13 23:57:37.214854] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:06.648  [2024-12-13 23:57:37.215323] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:06.648  [2024-12-13 23:57:37.215565] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:24:06.648  [2024-12-13 23:57:37.215825] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:24:06.648  [2024-12-13 23:57:37.215938] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:24:06.648  pt2
00:24:06.648   23:57:37	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:24:06.648   23:57:37	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:24:06.648   23:57:37	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:24:06.907  [2024-12-13 23:57:37.462487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:24:06.907  [2024-12-13 23:57:37.462668] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:06.907  [2024-12-13 23:57:37.462730] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x61600000ae80
00:24:06.907  [2024-12-13 23:57:37.462843] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:06.907  [2024-12-13 23:57:37.463365] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:06.907  [2024-12-13 23:57:37.463558] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:24:06.907  [2024-12-13 23:57:37.463737] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:24:06.907  [2024-12-13 23:57:37.463872] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:24:06.907  pt3
00:24:06.907   23:57:37	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:24:06.907   23:57:37	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:24:06.907   23:57:37	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:24:06.907  [2024-12-13 23:57:37.634519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:24:06.907  [2024-12-13 23:57:37.634700] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:06.907  [2024-12-13 23:57:37.634768] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x61600000b180
00:24:06.907  [2024-12-13 23:57:37.634880] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:06.907  [2024-12-13 23:57:37.635280] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:06.907  [2024-12-13 23:57:37.635470] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:24:06.907  [2024-12-13 23:57:37.635689] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:24:06.907  [2024-12-13 23:57:37.635814] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:24:06.907  [2024-12-13 23:57:37.636011] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580
00:24:06.907  [2024-12-13 23:57:37.636105] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:24:06.907  [2024-12-13 23:57:37.636260] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:24:07.165  [2024-12-13 23:57:37.642495] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580
00:24:07.165  [2024-12-13 23:57:37.642649] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580
00:24:07.165  [2024-12-13 23:57:37.642955] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:07.165  pt4
00:24:07.166   23:57:37	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:24:07.166   23:57:37	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
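The @422-@423 traces above are a loop that recreates every base bdev except pt1 (which survived the earlier delete); each bdev_passthru_create triggers the examine path, which re-claims the bdev into raid_bdev1 until the array goes online. Reconstructed from the traces, the loop is roughly:

  for ((i = 1; i < num_base_bdevs; i++)); do
      # i=1 recreates malloc2/pt2, i=2 malloc3/pt3, i=3 malloc4/pt4
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          bdev_passthru_create -b "malloc$((i + 1))" -p "pt$((i + 1))" \
          -u "00000000-0000-0000-0000-00000000000$((i + 1))"
  done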
00:24:07.166   23:57:37	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:24:07.166   23:57:37	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:07.166   23:57:37	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:07.166   23:57:37	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:07.166   23:57:37	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:07.166   23:57:37	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:07.166   23:57:37	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:07.166   23:57:37	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:07.166   23:57:37	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:07.166   23:57:37	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:07.166    23:57:37	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:07.166    23:57:37	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:07.424   23:57:37	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:07.424    "name": "raid_bdev1",
00:24:07.424    "uuid": "60a18ee5-f7c7-40ea-9b59-9eca0aa36972",
00:24:07.424    "strip_size_kb": 64,
00:24:07.424    "state": "online",
00:24:07.424    "raid_level": "raid5f",
00:24:07.424    "superblock": true,
00:24:07.424    "num_base_bdevs": 4,
00:24:07.424    "num_base_bdevs_discovered": 4,
00:24:07.424    "num_base_bdevs_operational": 4,
00:24:07.424    "base_bdevs_list": [
00:24:07.424      {
00:24:07.424        "name": "pt1",
00:24:07.424        "uuid": "3e179f83-22e2-5ab2-b628-ed86daddd658",
00:24:07.424        "is_configured": true,
00:24:07.424        "data_offset": 2048,
00:24:07.424        "data_size": 63488
00:24:07.424      },
00:24:07.424      {
00:24:07.424        "name": "pt2",
00:24:07.424        "uuid": "fdedd021-c1c5-5a6f-9c72-bd224b3ab51c",
00:24:07.424        "is_configured": true,
00:24:07.424        "data_offset": 2048,
00:24:07.424        "data_size": 63488
00:24:07.424      },
00:24:07.424      {
00:24:07.424        "name": "pt3",
00:24:07.424        "uuid": "906d2baf-d1bb-5daa-9612-3f5ed1786c46",
00:24:07.424        "is_configured": true,
00:24:07.424        "data_offset": 2048,
00:24:07.424        "data_size": 63488
00:24:07.424      },
00:24:07.424      {
00:24:07.424        "name": "pt4",
00:24:07.424        "uuid": "e4aa569a-5abe-5532-9e54-169f01ff5312",
00:24:07.424        "is_configured": true,
00:24:07.424        "data_offset": 2048,
00:24:07.424        "data_size": 63488
00:24:07.424      }
00:24:07.424    ]
00:24:07.424  }'
00:24:07.424   23:57:37	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:07.424   23:57:37	-- common/autotest_common.sh@10 -- # set +x
00:24:07.991    23:57:38	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:24:07.991    23:57:38	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:24:07.991  [2024-12-13 23:57:38.689980] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:24:07.991   23:57:38	-- bdev/bdev_raid.sh@430 -- # '[' 60a18ee5-f7c7-40ea-9b59-9eca0aa36972 '!=' 60a18ee5-f7c7-40ea-9b59-9eca0aa36972 ']'
00:24:07.991   23:57:38	-- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f
00:24:07.991   23:57:38	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:24:07.991   23:57:38	-- bdev/bdev_raid.sh@196 -- # return 0
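The @195-@196 traces show has_redundancy returning 0 for raid5f, which gates the "remove a base bdev while online" steps that follow. Judging from the traces it is a plain case statement; only the raid5f arm is confirmed by this log, so the rest of this sketch is a guess:

  has_redundancy() {
      case $1 in
          raid5f | raid1) return 0 ;;  # levels that tolerate a missing base bdev
          *) return 1 ;;               # assumed fall-through arm, not shown in this log
      esac
  }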
00:24:07.991   23:57:38	-- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:24:08.250  [2024-12-13 23:57:38.881939] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:24:08.250   23:57:38	-- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:24:08.250   23:57:38	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:08.250   23:57:38	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:08.250   23:57:38	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:08.250   23:57:38	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:08.250   23:57:38	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:08.250   23:57:38	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:08.250   23:57:38	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:08.250   23:57:38	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:08.250   23:57:38	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:08.250    23:57:38	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:08.250    23:57:38	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:08.508   23:57:39	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:08.508    "name": "raid_bdev1",
00:24:08.508    "uuid": "60a18ee5-f7c7-40ea-9b59-9eca0aa36972",
00:24:08.508    "strip_size_kb": 64,
00:24:08.508    "state": "online",
00:24:08.508    "raid_level": "raid5f",
00:24:08.508    "superblock": true,
00:24:08.508    "num_base_bdevs": 4,
00:24:08.508    "num_base_bdevs_discovered": 3,
00:24:08.508    "num_base_bdevs_operational": 3,
00:24:08.508    "base_bdevs_list": [
00:24:08.508      {
00:24:08.508        "name": null,
00:24:08.508        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:08.508        "is_configured": false,
00:24:08.508        "data_offset": 2048,
00:24:08.508        "data_size": 63488
00:24:08.508      },
00:24:08.508      {
00:24:08.508        "name": "pt2",
00:24:08.508        "uuid": "fdedd021-c1c5-5a6f-9c72-bd224b3ab51c",
00:24:08.508        "is_configured": true,
00:24:08.508        "data_offset": 2048,
00:24:08.508        "data_size": 63488
00:24:08.508      },
00:24:08.508      {
00:24:08.508        "name": "pt3",
00:24:08.508        "uuid": "906d2baf-d1bb-5daa-9612-3f5ed1786c46",
00:24:08.508        "is_configured": true,
00:24:08.508        "data_offset": 2048,
00:24:08.508        "data_size": 63488
00:24:08.508      },
00:24:08.508      {
00:24:08.508        "name": "pt4",
00:24:08.508        "uuid": "e4aa569a-5abe-5532-9e54-169f01ff5312",
00:24:08.508        "is_configured": true,
00:24:08.508        "data_offset": 2048,
00:24:08.508        "data_size": 63488
00:24:08.508      }
00:24:08.508    ]
00:24:08.508  }'
00:24:08.508   23:57:39	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:08.508   23:57:39	-- common/autotest_common.sh@10 -- # set +x
00:24:09.076   23:57:39	-- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:24:09.334  [2024-12-13 23:57:39.986164] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:09.334  [2024-12-13 23:57:39.986315] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:24:09.334  [2024-12-13 23:57:39.986464] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:09.334  [2024-12-13 23:57:39.986640] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:09.334  [2024-12-13 23:57:39.986750] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline
00:24:09.334    23:57:40	-- bdev/bdev_raid.sh@443 -- # jq -r '.[]'
00:24:09.334    23:57:40	-- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:09.593   23:57:40	-- bdev/bdev_raid.sh@443 -- # raid_bdev=
00:24:09.593   23:57:40	-- bdev/bdev_raid.sh@444 -- # '[' -n '' ']'
00:24:09.593   23:57:40	-- bdev/bdev_raid.sh@449 -- # (( i = 1 ))
00:24:09.593   23:57:40	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:24:09.593   23:57:40	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:24:09.852   23:57:40	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:24:09.852   23:57:40	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:24:09.852   23:57:40	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:24:09.852   23:57:40	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:24:09.852   23:57:40	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:24:09.852   23:57:40	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:24:10.110   23:57:40	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:24:10.110   23:57:40	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
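After bdev_raid_delete, the @443-@444 check captures bdev_raid_get_bdevs output into raid_bdev and asserts it is empty (the '-n' test on '' failing is the pass condition), and the @449-@450 loop then tears down the remaining passthru bdevs. Condensed from the traces:

  raid_bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all | jq -r '.[]')
  [[ -z $raid_bdev ]]                     # nothing should survive the delete
  for ((i = 1; i < num_base_bdevs; i++)); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          bdev_passthru_delete "pt$((i + 1))"
  done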
00:24:10.110   23:57:40	-- bdev/bdev_raid.sh@454 -- # (( i = 1 ))
00:24:10.110   23:57:40	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:24:10.110   23:57:40	-- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:24:10.369  [2024-12-13 23:57:40.902277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:24:10.369  [2024-12-13 23:57:40.902464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:10.369  [2024-12-13 23:57:40.902530] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x61600000b480
00:24:10.369  [2024-12-13 23:57:40.902656] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:10.369  [2024-12-13 23:57:40.904889] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:10.370  [2024-12-13 23:57:40.905064] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:24:10.370  [2024-12-13 23:57:40.905261] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:24:10.370  [2024-12-13 23:57:40.905396] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:24:10.370  pt2
00:24:10.370   23:57:40	-- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:24:10.370   23:57:40	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:10.370   23:57:40	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:24:10.370   23:57:40	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:10.370   23:57:40	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:10.370   23:57:40	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:10.370   23:57:40	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:10.370   23:57:40	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:10.370   23:57:40	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:10.370   23:57:40	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:10.370    23:57:40	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:10.370    23:57:40	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:10.628   23:57:41	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:10.628    "name": "raid_bdev1",
00:24:10.628    "uuid": "60a18ee5-f7c7-40ea-9b59-9eca0aa36972",
00:24:10.628    "strip_size_kb": 64,
00:24:10.628    "state": "configuring",
00:24:10.628    "raid_level": "raid5f",
00:24:10.628    "superblock": true,
00:24:10.628    "num_base_bdevs": 4,
00:24:10.628    "num_base_bdevs_discovered": 1,
00:24:10.628    "num_base_bdevs_operational": 3,
00:24:10.628    "base_bdevs_list": [
00:24:10.628      {
00:24:10.628        "name": null,
00:24:10.628        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:10.628        "is_configured": false,
00:24:10.628        "data_offset": 2048,
00:24:10.628        "data_size": 63488
00:24:10.628      },
00:24:10.628      {
00:24:10.628        "name": "pt2",
00:24:10.628        "uuid": "fdedd021-c1c5-5a6f-9c72-bd224b3ab51c",
00:24:10.628        "is_configured": true,
00:24:10.628        "data_offset": 2048,
00:24:10.628        "data_size": 63488
00:24:10.628      },
00:24:10.628      {
00:24:10.628        "name": null,
00:24:10.628        "uuid": "906d2baf-d1bb-5daa-9612-3f5ed1786c46",
00:24:10.628        "is_configured": false,
00:24:10.628        "data_offset": 2048,
00:24:10.628        "data_size": 63488
00:24:10.628      },
00:24:10.628      {
00:24:10.629        "name": null,
00:24:10.629        "uuid": "e4aa569a-5abe-5532-9e54-169f01ff5312",
00:24:10.629        "is_configured": false,
00:24:10.629        "data_offset": 2048,
00:24:10.629        "data_size": 63488
00:24:10.629      }
00:24:10.629    ]
00:24:10.629  }'
00:24:10.629   23:57:41	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:10.629   23:57:41	-- common/autotest_common.sh@10 -- # set +x
00:24:11.196   23:57:41	-- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:24:11.196   23:57:41	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:24:11.196   23:57:41	-- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:24:11.196  [2024-12-13 23:57:41.918455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:24:11.196  [2024-12-13 23:57:41.918642] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:11.196  [2024-12-13 23:57:41.918715] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x61600000bd80
00:24:11.196  [2024-12-13 23:57:41.918827] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:11.196  [2024-12-13 23:57:41.919240] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:11.196  [2024-12-13 23:57:41.919407] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:24:11.196  [2024-12-13 23:57:41.919650] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:24:11.196  [2024-12-13 23:57:41.919806] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:24:11.196  pt3
00:24:11.454   23:57:41	-- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:24:11.454   23:57:41	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:11.454   23:57:41	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:24:11.454   23:57:41	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:11.454   23:57:41	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:11.454   23:57:41	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:11.454   23:57:41	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:11.454   23:57:41	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:11.454   23:57:41	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:11.454   23:57:41	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:11.454    23:57:41	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:11.454    23:57:41	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:11.714   23:57:42	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:11.714    "name": "raid_bdev1",
00:24:11.714    "uuid": "60a18ee5-f7c7-40ea-9b59-9eca0aa36972",
00:24:11.714    "strip_size_kb": 64,
00:24:11.714    "state": "configuring",
00:24:11.714    "raid_level": "raid5f",
00:24:11.714    "superblock": true,
00:24:11.714    "num_base_bdevs": 4,
00:24:11.714    "num_base_bdevs_discovered": 2,
00:24:11.714    "num_base_bdevs_operational": 3,
00:24:11.714    "base_bdevs_list": [
00:24:11.714      {
00:24:11.714        "name": null,
00:24:11.714        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:11.714        "is_configured": false,
00:24:11.714        "data_offset": 2048,
00:24:11.714        "data_size": 63488
00:24:11.714      },
00:24:11.714      {
00:24:11.714        "name": "pt2",
00:24:11.714        "uuid": "fdedd021-c1c5-5a6f-9c72-bd224b3ab51c",
00:24:11.714        "is_configured": true,
00:24:11.714        "data_offset": 2048,
00:24:11.714        "data_size": 63488
00:24:11.714      },
00:24:11.714      {
00:24:11.714        "name": "pt3",
00:24:11.714        "uuid": "906d2baf-d1bb-5daa-9612-3f5ed1786c46",
00:24:11.714        "is_configured": true,
00:24:11.714        "data_offset": 2048,
00:24:11.714        "data_size": 63488
00:24:11.714      },
00:24:11.714      {
00:24:11.714        "name": null,
00:24:11.714        "uuid": "e4aa569a-5abe-5532-9e54-169f01ff5312",
00:24:11.714        "is_configured": false,
00:24:11.714        "data_offset": 2048,
00:24:11.714        "data_size": 63488
00:24:11.714      }
00:24:11.714    ]
00:24:11.714  }'
00:24:11.714   23:57:42	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:11.714   23:57:42	-- common/autotest_common.sh@10 -- # set +x
00:24:12.282   23:57:42	-- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:24:12.282   23:57:42	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:24:12.282   23:57:42	-- bdev/bdev_raid.sh@462 -- # i=3
00:24:12.282   23:57:42	-- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:24:12.282  [2024-12-13 23:57:42.942692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:24:12.282  [2024-12-13 23:57:42.942913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:12.282  [2024-12-13 23:57:42.942987] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x61600000c080
00:24:12.282  [2024-12-13 23:57:42.943259] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:12.282  [2024-12-13 23:57:42.943849] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:12.282  [2024-12-13 23:57:42.944019] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:24:12.282  [2024-12-13 23:57:42.944211] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:24:12.282  [2024-12-13 23:57:42.944326] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:24:12.282  [2024-12-13 23:57:42.944506] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80
00:24:12.282  [2024-12-13 23:57:42.944612] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:24:12.282  [2024-12-13 23:57:42.944766] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:24:12.282  [2024-12-13 23:57:42.950753] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80
00:24:12.282  [2024-12-13 23:57:42.950903] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80
00:24:12.282  [2024-12-13 23:57:42.951250] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:12.282  pt4
00:24:12.282   23:57:42	-- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:24:12.282   23:57:42	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:12.282   23:57:42	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:12.282   23:57:42	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:12.282   23:57:42	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:12.282   23:57:42	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:12.282   23:57:42	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:12.282   23:57:42	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:12.282   23:57:42	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:12.282   23:57:42	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:12.282    23:57:42	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:12.282    23:57:42	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:12.540   23:57:43	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:12.540    "name": "raid_bdev1",
00:24:12.540    "uuid": "60a18ee5-f7c7-40ea-9b59-9eca0aa36972",
00:24:12.540    "strip_size_kb": 64,
00:24:12.540    "state": "online",
00:24:12.540    "raid_level": "raid5f",
00:24:12.540    "superblock": true,
00:24:12.540    "num_base_bdevs": 4,
00:24:12.540    "num_base_bdevs_discovered": 3,
00:24:12.540    "num_base_bdevs_operational": 3,
00:24:12.540    "base_bdevs_list": [
00:24:12.540      {
00:24:12.540        "name": null,
00:24:12.540        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:12.540        "is_configured": false,
00:24:12.540        "data_offset": 2048,
00:24:12.540        "data_size": 63488
00:24:12.540      },
00:24:12.540      {
00:24:12.540        "name": "pt2",
00:24:12.540        "uuid": "fdedd021-c1c5-5a6f-9c72-bd224b3ab51c",
00:24:12.540        "is_configured": true,
00:24:12.540        "data_offset": 2048,
00:24:12.540        "data_size": 63488
00:24:12.540      },
00:24:12.540      {
00:24:12.540        "name": "pt3",
00:24:12.540        "uuid": "906d2baf-d1bb-5daa-9612-3f5ed1786c46",
00:24:12.540        "is_configured": true,
00:24:12.540        "data_offset": 2048,
00:24:12.540        "data_size": 63488
00:24:12.540      },
00:24:12.540      {
00:24:12.540        "name": "pt4",
00:24:12.541        "uuid": "e4aa569a-5abe-5532-9e54-169f01ff5312",
00:24:12.541        "is_configured": true,
00:24:12.541        "data_offset": 2048,
00:24:12.541        "data_size": 63488
00:24:12.541      }
00:24:12.541    ]
00:24:12.541  }'
00:24:12.541   23:57:43	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:12.541   23:57:43	-- common/autotest_common.sh@10 -- # set +x
00:24:13.477   23:57:43	-- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']'
00:24:13.477   23:57:43	-- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:24:13.477  [2024-12-13 23:57:44.061995] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:13.477  [2024-12-13 23:57:44.062160] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:24:13.477  [2024-12-13 23:57:44.062301] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:13.477  [2024-12-13 23:57:44.062473] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:13.477  [2024-12-13 23:57:44.062575] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline
00:24:13.477    23:57:44	-- bdev/bdev_raid.sh@471 -- # jq -r '.[]'
00:24:13.477    23:57:44	-- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:13.736   23:57:44	-- bdev/bdev_raid.sh@471 -- # raid_bdev=
00:24:13.736   23:57:44	-- bdev/bdev_raid.sh@472 -- # '[' -n '' ']'
00:24:13.736   23:57:44	-- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:24:13.995  [2024-12-13 23:57:44.498105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:24:13.995  [2024-12-13 23:57:44.498306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:13.995  [2024-12-13 23:57:44.498377] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x61600000c380
00:24:13.995  [2024-12-13 23:57:44.498525] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:13.995  [2024-12-13 23:57:44.500680] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:13.995  [2024-12-13 23:57:44.500858] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:24:13.995  [2024-12-13 23:57:44.501072] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:24:13.995  [2024-12-13 23:57:44.501210] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:24:13.995  pt1
00:24:13.995   23:57:44	-- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:24:13.995   23:57:44	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:13.995   23:57:44	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:24:13.995   23:57:44	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:13.995   23:57:44	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:13.995   23:57:44	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:13.995   23:57:44	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:13.995   23:57:44	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:13.995   23:57:44	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:13.995   23:57:44	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:13.995    23:57:44	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:13.995    23:57:44	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:13.995   23:57:44	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:13.995    "name": "raid_bdev1",
00:24:13.995    "uuid": "60a18ee5-f7c7-40ea-9b59-9eca0aa36972",
00:24:13.995    "strip_size_kb": 64,
00:24:13.995    "state": "configuring",
00:24:13.995    "raid_level": "raid5f",
00:24:13.995    "superblock": true,
00:24:13.995    "num_base_bdevs": 4,
00:24:13.995    "num_base_bdevs_discovered": 1,
00:24:13.995    "num_base_bdevs_operational": 4,
00:24:13.995    "base_bdevs_list": [
00:24:13.995      {
00:24:13.995        "name": "pt1",
00:24:13.995        "uuid": "3e179f83-22e2-5ab2-b628-ed86daddd658",
00:24:13.995        "is_configured": true,
00:24:13.995        "data_offset": 2048,
00:24:13.995        "data_size": 63488
00:24:13.995      },
00:24:13.995      {
00:24:13.995        "name": null,
00:24:13.995        "uuid": "fdedd021-c1c5-5a6f-9c72-bd224b3ab51c",
00:24:13.995        "is_configured": false,
00:24:13.995        "data_offset": 2048,
00:24:13.995        "data_size": 63488
00:24:13.995      },
00:24:13.995      {
00:24:13.995        "name": null,
00:24:13.995        "uuid": "906d2baf-d1bb-5daa-9612-3f5ed1786c46",
00:24:13.995        "is_configured": false,
00:24:13.995        "data_offset": 2048,
00:24:13.995        "data_size": 63488
00:24:13.995      },
00:24:13.995      {
00:24:13.995        "name": null,
00:24:13.995        "uuid": "e4aa569a-5abe-5532-9e54-169f01ff5312",
00:24:13.995        "is_configured": false,
00:24:13.995        "data_offset": 2048,
00:24:13.995        "data_size": 63488
00:24:13.995      }
00:24:13.995    ]
00:24:13.995  }'
00:24:13.995   23:57:44	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:13.995   23:57:44	-- common/autotest_common.sh@10 -- # set +x
00:24:14.562   23:57:45	-- bdev/bdev_raid.sh@484 -- # (( i = 1 ))
00:24:14.562   23:57:45	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:24:14.562   23:57:45	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:24:14.820   23:57:45	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:24:14.820   23:57:45	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:24:14.820   23:57:45	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:24:15.079   23:57:45	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:24:15.079   23:57:45	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:24:15.079   23:57:45	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:24:15.337   23:57:45	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:24:15.337   23:57:45	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:24:15.338   23:57:45	-- bdev/bdev_raid.sh@489 -- # i=3
00:24:15.338   23:57:45	-- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:24:15.597  [2024-12-13 23:57:46.102464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:24:15.597  [2024-12-13 23:57:46.102661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:15.597  [2024-12-13 23:57:46.102727] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x61600000cc80
00:24:15.597  [2024-12-13 23:57:46.102851] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:15.597  [2024-12-13 23:57:46.103260] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:15.597  [2024-12-13 23:57:46.103445] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:24:15.597  [2024-12-13 23:57:46.103665] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:24:15.597  [2024-12-13 23:57:46.103780] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2)
00:24:15.597  [2024-12-13 23:57:46.103888] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:15.597  [2024-12-13 23:57:46.104007] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c980 name raid_bdev1, state configuring
00:24:15.597  [2024-12-13 23:57:46.104168] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:24:15.597  pt4
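The seq_number message at 23:57:46 is the interesting part of this step: pt4 carries an on-disk superblock with sequence number 4, newer than the half-assembled raid_bdev1 (sequence 2), so examine deletes the stale configuring bdev and re-registers pt4 against the newer superblock. An illustrative way to confirm the resulting topology from a shell (not part of the test script) would be:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'
  # prints 1 at this point: only pt4 is configured, as the JSON dump below confirms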
00:24:15.597   23:57:46	-- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:24:15.597   23:57:46	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:15.597   23:57:46	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:24:15.597   23:57:46	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:15.597   23:57:46	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:15.597   23:57:46	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:15.597   23:57:46	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:15.597   23:57:46	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:15.597   23:57:46	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:15.597   23:57:46	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:15.597    23:57:46	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:15.597    23:57:46	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:15.597   23:57:46	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:15.597    "name": "raid_bdev1",
00:24:15.597    "uuid": "60a18ee5-f7c7-40ea-9b59-9eca0aa36972",
00:24:15.597    "strip_size_kb": 64,
00:24:15.597    "state": "configuring",
00:24:15.597    "raid_level": "raid5f",
00:24:15.597    "superblock": true,
00:24:15.597    "num_base_bdevs": 4,
00:24:15.597    "num_base_bdevs_discovered": 1,
00:24:15.597    "num_base_bdevs_operational": 3,
00:24:15.597    "base_bdevs_list": [
00:24:15.597      {
00:24:15.597        "name": null,
00:24:15.597        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:15.597        "is_configured": false,
00:24:15.597        "data_offset": 2048,
00:24:15.597        "data_size": 63488
00:24:15.597      },
00:24:15.597      {
00:24:15.597        "name": null,
00:24:15.597        "uuid": "fdedd021-c1c5-5a6f-9c72-bd224b3ab51c",
00:24:15.597        "is_configured": false,
00:24:15.597        "data_offset": 2048,
00:24:15.597        "data_size": 63488
00:24:15.597      },
00:24:15.597      {
00:24:15.597        "name": null,
00:24:15.597        "uuid": "906d2baf-d1bb-5daa-9612-3f5ed1786c46",
00:24:15.597        "is_configured": false,
00:24:15.597        "data_offset": 2048,
00:24:15.597        "data_size": 63488
00:24:15.597      },
00:24:15.597      {
00:24:15.597        "name": "pt4",
00:24:15.597        "uuid": "e4aa569a-5abe-5532-9e54-169f01ff5312",
00:24:15.597        "is_configured": true,
00:24:15.597        "data_offset": 2048,
00:24:15.597        "data_size": 63488
00:24:15.597      }
00:24:15.597    ]
00:24:15.597  }'
00:24:15.597   23:57:46	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:15.597   23:57:46	-- common/autotest_common.sh@10 -- # set +x
00:24:16.533   23:57:46	-- bdev/bdev_raid.sh@497 -- # (( i = 1 ))
00:24:16.533   23:57:46	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:24:16.533   23:57:46	-- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:24:16.533  [2024-12-13 23:57:47.149120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:24:16.533  [2024-12-13 23:57:47.149359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:16.533  [2024-12-13 23:57:47.149438] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x61600000d280
00:24:16.533  [2024-12-13 23:57:47.149683] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:16.533  [2024-12-13 23:57:47.150173] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:16.533  [2024-12-13 23:57:47.150369] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:24:16.533  [2024-12-13 23:57:47.150567] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:24:16.533  [2024-12-13 23:57:47.150702] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:24:16.533  pt2
00:24:16.533   23:57:47	-- bdev/bdev_raid.sh@497 -- # (( i++ ))
00:24:16.533   23:57:47	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:24:16.533   23:57:47	-- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:24:16.792  [2024-12-13 23:57:47.341148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:24:16.792  [2024-12-13 23:57:47.341348] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:16.792  [2024-12-13 23:57:47.341414] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x61600000d580
00:24:16.792  [2024-12-13 23:57:47.341537] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:16.792  [2024-12-13 23:57:47.341943] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:16.792  [2024-12-13 23:57:47.342129] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:24:16.792  [2024-12-13 23:57:47.342335] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:24:16.792  [2024-12-13 23:57:47.342467] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:24:16.792  [2024-12-13 23:57:47.342624] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80
00:24:16.792  [2024-12-13 23:57:47.342732] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:24:16.792  [2024-12-13 23:57:47.342861] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:24:16.792  [2024-12-13 23:57:47.348371] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80
00:24:16.792  [2024-12-13 23:57:47.348498] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80
00:24:16.792  [2024-12-13 23:57:47.348837] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:16.792  pt3
00:24:16.792   23:57:47	-- bdev/bdev_raid.sh@497 -- # (( i++ ))
00:24:16.792   23:57:47	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:24:16.792   23:57:47	-- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:24:16.792   23:57:47	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:16.792   23:57:47	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:16.792   23:57:47	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:16.792   23:57:47	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:16.792   23:57:47	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:16.792   23:57:47	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:16.792   23:57:47	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:16.792   23:57:47	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:16.792   23:57:47	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:16.792    23:57:47	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:16.792    23:57:47	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:17.051   23:57:47	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:17.051    "name": "raid_bdev1",
00:24:17.051    "uuid": "60a18ee5-f7c7-40ea-9b59-9eca0aa36972",
00:24:17.051    "strip_size_kb": 64,
00:24:17.051    "state": "online",
00:24:17.051    "raid_level": "raid5f",
00:24:17.051    "superblock": true,
00:24:17.051    "num_base_bdevs": 4,
00:24:17.051    "num_base_bdevs_discovered": 3,
00:24:17.051    "num_base_bdevs_operational": 3,
00:24:17.051    "base_bdevs_list": [
00:24:17.051      {
00:24:17.051        "name": null,
00:24:17.051        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:17.051        "is_configured": false,
00:24:17.051        "data_offset": 2048,
00:24:17.051        "data_size": 63488
00:24:17.051      },
00:24:17.051      {
00:24:17.051        "name": "pt2",
00:24:17.051        "uuid": "fdedd021-c1c5-5a6f-9c72-bd224b3ab51c",
00:24:17.051        "is_configured": true,
00:24:17.051        "data_offset": 2048,
00:24:17.051        "data_size": 63488
00:24:17.051      },
00:24:17.051      {
00:24:17.051        "name": "pt3",
00:24:17.051        "uuid": "906d2baf-d1bb-5daa-9612-3f5ed1786c46",
00:24:17.051        "is_configured": true,
00:24:17.051        "data_offset": 2048,
00:24:17.051        "data_size": 63488
00:24:17.051      },
00:24:17.051      {
00:24:17.051        "name": "pt4",
00:24:17.051        "uuid": "e4aa569a-5abe-5532-9e54-169f01ff5312",
00:24:17.051        "is_configured": true,
00:24:17.051        "data_offset": 2048,
00:24:17.051        "data_size": 63488
00:24:17.051      }
00:24:17.051    ]
00:24:17.051  }'
00:24:17.051   23:57:47	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:17.051   23:57:47	-- common/autotest_common.sh@10 -- # set +x
00:24:17.618    23:57:48	-- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:24:17.618    23:57:48	-- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid'
00:24:17.618  [2024-12-13 23:57:48.317340] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:24:17.618   23:57:48	-- bdev/bdev_raid.sh@506 -- # '[' 60a18ee5-f7c7-40ea-9b59-9eca0aa36972 '!=' 60a18ee5-f7c7-40ea-9b59-9eca0aa36972 ']'
00:24:17.618   23:57:48	-- bdev/bdev_raid.sh@511 -- # killprocess 130193
00:24:17.618   23:57:48	-- common/autotest_common.sh@936 -- # '[' -z 130193 ']'
00:24:17.618   23:57:48	-- common/autotest_common.sh@940 -- # kill -0 130193
00:24:17.618    23:57:48	-- common/autotest_common.sh@941 -- # uname
00:24:17.618   23:57:48	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:17.618    23:57:48	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130193
00:24:17.877  killing process with pid 130193
00:24:17.877   23:57:48	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:24:17.877   23:57:48	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:24:17.877   23:57:48	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 130193'
00:24:17.877   23:57:48	-- common/autotest_common.sh@955 -- # kill 130193
00:24:17.877  [2024-12-13 23:57:48.356566] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:24:17.877   23:57:48	-- common/autotest_common.sh@960 -- # wait 130193
00:24:17.877  [2024-12-13 23:57:48.356624] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:17.877  [2024-12-13 23:57:48.356682] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:17.877  [2024-12-13 23:57:48.356692] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state offline
00:24:17.877  [2024-12-13 23:57:48.609031] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
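The @936-@960 traces interleaved above walk the common killprocess helper as it brings the target down. Condensed into a sketch (reconstructed from the traces, so the sudo branch and error handling are assumptions):

  killprocess() {
      local pid=$1 process_name
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1       # bail out if the process already exited
      if [[ $(uname) == Linux ]]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      # the real helper escalates when the target runs under sudo; elided here
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }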
00:24:18.815  ************************************
00:24:18.815  END TEST raid5f_superblock_test
00:24:18.815  ************************************
00:24:18.815   23:57:49	-- bdev/bdev_raid.sh@513 -- # return 0
00:24:18.815  
00:24:18.815  real	0m20.626s
00:24:18.815  user	0m37.776s
00:24:18.815  sys	0m2.418s
00:24:18.815   23:57:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:24:18.815   23:57:49	-- common/autotest_common.sh@10 -- # set +x
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@747 -- # '[' true = true ']'
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false
00:24:19.074   23:57:49	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:24:19.074   23:57:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:24:19.074   23:57:49	-- common/autotest_common.sh@10 -- # set +x
00:24:19.074  ************************************
00:24:19.074  START TEST raid5f_rebuild_test
00:24:19.074  ************************************
00:24:19.074   23:57:49	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 false false
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@519 -- # local superblock=false
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@520 -- # local background_io=false
00:24:19.074    23:57:49	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:24:19.074    23:57:49	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:19.074    23:57:49	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:24:19.074    23:57:49	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:24:19.074    23:57:49	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:19.074    23:57:49	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:24:19.074    23:57:49	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:24:19.074    23:57:49	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:19.074    23:57:49	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:24:19.074    23:57:49	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:24:19.074    23:57:49	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:19.074    23:57:49	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev4
00:24:19.074    23:57:49	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:24:19.074    23:57:49	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']'
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@529 -- # '[' false = true ']'
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@533 -- # strip_size=64
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64'
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@539 -- # '[' false = true ']'
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@544 -- # raid_pid=130851
00:24:19.074   23:57:49	-- bdev/bdev_raid.sh@545 -- # waitforlisten 130851 /var/tmp/spdk-raid.sock
00:24:19.075   23:57:49	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:24:19.075   23:57:49	-- common/autotest_common.sh@829 -- # '[' -z 130851 ']'
00:24:19.075   23:57:49	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:24:19.075   23:57:49	-- common/autotest_common.sh@834 -- # local max_retries=100
00:24:19.075   23:57:49	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:24:19.075  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:24:19.075   23:57:49	-- common/autotest_common.sh@838 -- # xtrace_disable
00:24:19.075   23:57:49	-- common/autotest_common.sh@10 -- # set +x
00:24:19.075  [2024-12-13 23:57:49.659851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:24:19.075  [2024-12-13 23:57:49.660215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130851 ]
00:24:19.075  I/O size of 3145728 is greater than zero copy threshold (65536).
00:24:19.075  Zero copy mechanism will not be used.
00:24:19.333  [2024-12-13 23:57:49.821745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:19.333  [2024-12-13 23:57:50.014762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:19.593  [2024-12-13 23:57:50.204709] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
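Unlike the superblock test, which talked to a separate target, raid5f_rebuild_test uses bdevperf itself as the RPC server: the @543 command starts a 60 s randrw workload (3 MiB I/Os, queue depth 2, 50% read mix) and @545 waits on the UNIX socket. Reconstructed from the traces (the backgrounding detail is inferred from raid_pid being set at @544):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
      -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # returns once the socket accepts RPCs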
00:24:20.189   23:57:50	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:20.189   23:57:50	-- common/autotest_common.sh@862 -- # return 0
00:24:20.189   23:57:50	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:24:20.189   23:57:50	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:24:20.189   23:57:50	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:24:20.189  BaseBdev1
00:24:20.189   23:57:50	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:24:20.189   23:57:50	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:24:20.189   23:57:50	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:24:20.448  BaseBdev2
00:24:20.448   23:57:51	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:24:20.448   23:57:51	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:24:20.448   23:57:51	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:24:20.707  BaseBdev3
00:24:20.707   23:57:51	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:24:20.707   23:57:51	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:24:20.707   23:57:51	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:24:20.966  BaseBdev4
00:24:20.966   23:57:51	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:24:21.225  spare_malloc
00:24:21.225   23:57:51	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:24:21.484  spare_delay
00:24:21.484   23:57:52	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:24:21.484  [2024-12-13 23:57:52.203081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:24:21.484  [2024-12-13 23:57:52.203398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:21.484  [2024-12-13 23:57:52.203484] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:24:21.484  [2024-12-13 23:57:52.203644] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:21.484  [2024-12-13 23:57:52.205917] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:21.484  [2024-12-13 23:57:52.206096] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:24:21.484  spare
00:24:21.743   23:57:52	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:24:21.743  [2024-12-13 23:57:52.387155] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:24:21.743  [2024-12-13 23:57:52.389188] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:24:21.743  [2024-12-13 23:57:52.389366] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:24:21.743  [2024-12-13 23:57:52.389442] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:24:21.743  [2024-12-13 23:57:52.389633] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80
00:24:21.743  [2024-12-13 23:57:52.389679] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:24:21.743  [2024-12-13 23:57:52.389938] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930
00:24:21.743  [2024-12-13 23:57:52.395530] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80
00:24:21.743  [2024-12-13 23:57:52.395660] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80
00:24:21.743  [2024-12-13 23:57:52.395947] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:21.743   23:57:52	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:24:21.743   23:57:52	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:21.743   23:57:52	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:21.743   23:57:52	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:21.743   23:57:52	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:21.743   23:57:52	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:21.743   23:57:52	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:21.743   23:57:52	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:21.743   23:57:52	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:21.743   23:57:52	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:21.743    23:57:52	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:21.743    23:57:52	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:22.002   23:57:52	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:22.002    "name": "raid_bdev1",
00:24:22.002    "uuid": "cdf7d539-35a0-43ba-830e-062304c268fa",
00:24:22.002    "strip_size_kb": 64,
00:24:22.002    "state": "online",
00:24:22.002    "raid_level": "raid5f",
00:24:22.003    "superblock": false,
00:24:22.003    "num_base_bdevs": 4,
00:24:22.003    "num_base_bdevs_discovered": 4,
00:24:22.003    "num_base_bdevs_operational": 4,
00:24:22.003    "base_bdevs_list": [
00:24:22.003      {
00:24:22.003        "name": "BaseBdev1",
00:24:22.003        "uuid": "178bda23-99c3-4d81-a182-b43945b05992",
00:24:22.003        "is_configured": true,
00:24:22.003        "data_offset": 0,
00:24:22.003        "data_size": 65536
00:24:22.003      },
00:24:22.003      {
00:24:22.003        "name": "BaseBdev2",
00:24:22.003        "uuid": "0618a36c-e8be-45ab-9638-5280ea7503f4",
00:24:22.003        "is_configured": true,
00:24:22.003        "data_offset": 0,
00:24:22.003        "data_size": 65536
00:24:22.003      },
00:24:22.003      {
00:24:22.003        "name": "BaseBdev3",
00:24:22.003        "uuid": "92a4b02a-b381-4e50-935e-fac891307510",
00:24:22.003        "is_configured": true,
00:24:22.003        "data_offset": 0,
00:24:22.003        "data_size": 65536
00:24:22.003      },
00:24:22.003      {
00:24:22.003        "name": "BaseBdev4",
00:24:22.003        "uuid": "58005421-6f64-48f1-8ae2-b9001dc04abe",
00:24:22.003        "is_configured": true,
00:24:22.003        "data_offset": 0,
00:24:22.003        "data_size": 65536
00:24:22.003      }
00:24:22.003    ]
00:24:22.003  }'
00:24:22.003   23:57:52	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:22.003   23:57:52	-- common/autotest_common.sh@10 -- # set +x
00:24:22.570    23:57:53	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:24:22.570    23:57:53	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:24:22.829  [2024-12-13 23:57:53.414524] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:24:22.829   23:57:53	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608
00:24:22.829    23:57:53	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:22.829    23:57:53	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:24:23.088   23:57:53	-- bdev/bdev_raid.sh@570 -- # data_offset=0
00:24:23.088   23:57:53	-- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:24:23.088   23:57:53	-- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:24:23.088   23:57:53	-- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:24:23.088   23:57:53	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:23.088   23:57:53	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:24:23.088   23:57:53	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:24:23.088   23:57:53	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:24:23.088   23:57:53	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:24:23.088   23:57:53	-- bdev/nbd_common.sh@12 -- # local i
00:24:23.088   23:57:53	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:24:23.088   23:57:53	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:24:23.088   23:57:53	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:24:23.088  [2024-12-13 23:57:53.774436] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0
00:24:23.088  /dev/nbd0
00:24:23.088    23:57:53	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:24:23.088   23:57:53	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:24:23.088   23:57:53	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:24:23.088   23:57:53	-- common/autotest_common.sh@867 -- # local i
00:24:23.088   23:57:53	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:24:23.088   23:57:53	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:24:23.088   23:57:53	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:24:23.088   23:57:53	-- common/autotest_common.sh@871 -- # break
00:24:23.088   23:57:53	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:24:23.088   23:57:53	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:24:23.088   23:57:53	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:23.347  1+0 records in
00:24:23.347  1+0 records out
00:24:23.347  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706097 s, 5.8 MB/s
00:24:23.347    23:57:53	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:23.347   23:57:53	-- common/autotest_common.sh@884 -- # size=4096
00:24:23.347   23:57:53	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:23.347   23:57:53	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:24:23.347   23:57:53	-- common/autotest_common.sh@887 -- # return 0
00:24:23.347   23:57:53	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:23.347   23:57:53	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:24:23.347   23:57:53	-- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']'
00:24:23.347   23:57:53	-- bdev/bdev_raid.sh@581 -- # write_unit_size=384
00:24:23.347   23:57:53	-- bdev/bdev_raid.sh@582 -- # echo 192
00:24:23.347   23:57:53	-- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct
00:24:23.606  512+0 records in
00:24:23.606  512+0 records out
00:24:23.606  100663296 bytes (101 MB, 96 MiB) copied, 0.496935 s, 203 MB/s
00:24:23.606   23:57:54	-- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:24:23.606   23:57:54	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:23.606   23:57:54	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:24:23.606   23:57:54	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:24:23.606   23:57:54	-- bdev/nbd_common.sh@51 -- # local i
00:24:23.606   23:57:54	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:23.606   23:57:54	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:24:23.864    23:57:54	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:24:23.864   23:57:54	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:24:23.864   23:57:54	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:24:23.864   23:57:54	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:23.864   23:57:54	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:23.864   23:57:54	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:24:23.864  [2024-12-13 23:57:54.588726] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:23.864   23:57:54	-- bdev/nbd_common.sh@41 -- # break
00:24:23.864   23:57:54	-- bdev/nbd_common.sh@45 -- # return 0
00:24:23.864   23:57:54	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:24:24.123  [2024-12-13 23:57:54.768371] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:24:24.123   23:57:54	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:24:24.123   23:57:54	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:24.123   23:57:54	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:24.123   23:57:54	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:24.123   23:57:54	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:24.123   23:57:54	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:24.123   23:57:54	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:24.123   23:57:54	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:24.123   23:57:54	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:24.123   23:57:54	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:24.123    23:57:54	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:24.123    23:57:54	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:24.382   23:57:55	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:24.382    "name": "raid_bdev1",
00:24:24.382    "uuid": "cdf7d539-35a0-43ba-830e-062304c268fa",
00:24:24.382    "strip_size_kb": 64,
00:24:24.382    "state": "online",
00:24:24.382    "raid_level": "raid5f",
00:24:24.382    "superblock": false,
00:24:24.382    "num_base_bdevs": 4,
00:24:24.382    "num_base_bdevs_discovered": 3,
00:24:24.382    "num_base_bdevs_operational": 3,
00:24:24.382    "base_bdevs_list": [
00:24:24.382      {
00:24:24.382        "name": null,
00:24:24.382        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:24.382        "is_configured": false,
00:24:24.382        "data_offset": 0,
00:24:24.382        "data_size": 65536
00:24:24.382      },
00:24:24.382      {
00:24:24.382        "name": "BaseBdev2",
00:24:24.382        "uuid": "0618a36c-e8be-45ab-9638-5280ea7503f4",
00:24:24.382        "is_configured": true,
00:24:24.382        "data_offset": 0,
00:24:24.382        "data_size": 65536
00:24:24.382      },
00:24:24.382      {
00:24:24.382        "name": "BaseBdev3",
00:24:24.382        "uuid": "92a4b02a-b381-4e50-935e-fac891307510",
00:24:24.382        "is_configured": true,
00:24:24.382        "data_offset": 0,
00:24:24.382        "data_size": 65536
00:24:24.382      },
00:24:24.382      {
00:24:24.382        "name": "BaseBdev4",
00:24:24.382        "uuid": "58005421-6f64-48f1-8ae2-b9001dc04abe",
00:24:24.382        "is_configured": true,
00:24:24.382        "data_offset": 0,
00:24:24.382        "data_size": 65536
00:24:24.382      }
00:24:24.382    ]
00:24:24.382  }'
00:24:24.382   23:57:55	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:24.382   23:57:55	-- common/autotest_common.sh@10 -- # set +x
00:24:24.949   23:57:55	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:24:25.208  [2024-12-13 23:57:55.836516] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:24:25.208  [2024-12-13 23:57:55.836696] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:25.208  [2024-12-13 23:57:55.847375] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0
00:24:25.208  [2024-12-13 23:57:55.854510] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:25.208   23:57:55	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:24:26.144   23:57:56	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:26.144   23:57:56	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:26.144   23:57:56	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:26.144   23:57:56	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:26.144   23:57:56	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:26.144    23:57:56	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:26.144    23:57:56	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:26.402   23:57:57	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:26.402    "name": "raid_bdev1",
00:24:26.402    "uuid": "cdf7d539-35a0-43ba-830e-062304c268fa",
00:24:26.402    "strip_size_kb": 64,
00:24:26.402    "state": "online",
00:24:26.402    "raid_level": "raid5f",
00:24:26.402    "superblock": false,
00:24:26.402    "num_base_bdevs": 4,
00:24:26.402    "num_base_bdevs_discovered": 4,
00:24:26.402    "num_base_bdevs_operational": 4,
00:24:26.402    "process": {
00:24:26.402      "type": "rebuild",
00:24:26.402      "target": "spare",
00:24:26.402      "progress": {
00:24:26.402        "blocks": 23040,
00:24:26.402        "percent": 11
00:24:26.402      }
00:24:26.402    },
00:24:26.402    "base_bdevs_list": [
00:24:26.402      {
00:24:26.402        "name": "spare",
00:24:26.402        "uuid": "c9f4cd46-4f8e-5cf2-92d0-fc6d2f73f84f",
00:24:26.402        "is_configured": true,
00:24:26.402        "data_offset": 0,
00:24:26.402        "data_size": 65536
00:24:26.402      },
00:24:26.402      {
00:24:26.402        "name": "BaseBdev2",
00:24:26.402        "uuid": "0618a36c-e8be-45ab-9638-5280ea7503f4",
00:24:26.402        "is_configured": true,
00:24:26.402        "data_offset": 0,
00:24:26.402        "data_size": 65536
00:24:26.402      },
00:24:26.402      {
00:24:26.402        "name": "BaseBdev3",
00:24:26.402        "uuid": "92a4b02a-b381-4e50-935e-fac891307510",
00:24:26.402        "is_configured": true,
00:24:26.402        "data_offset": 0,
00:24:26.402        "data_size": 65536
00:24:26.402      },
00:24:26.402      {
00:24:26.402        "name": "BaseBdev4",
00:24:26.402        "uuid": "58005421-6f64-48f1-8ae2-b9001dc04abe",
00:24:26.402        "is_configured": true,
00:24:26.402        "data_offset": 0,
00:24:26.402        "data_size": 65536
00:24:26.402      }
00:24:26.402    ]
00:24:26.402  }'
00:24:26.402    23:57:57	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:26.661   23:57:57	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:26.661    23:57:57	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:26.661   23:57:57	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:26.661   23:57:57	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:24:26.919  [2024-12-13 23:57:57.440296] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:26.919  [2024-12-13 23:57:57.465754] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:24:26.919  [2024-12-13 23:57:57.465985] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:26.919   23:57:57	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:24:26.919   23:57:57	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:26.919   23:57:57	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:26.919   23:57:57	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:26.919   23:57:57	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:26.919   23:57:57	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:26.919   23:57:57	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:26.919   23:57:57	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:26.919   23:57:57	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:26.919   23:57:57	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:26.919    23:57:57	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:26.919    23:57:57	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:27.178   23:57:57	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:27.178    "name": "raid_bdev1",
00:24:27.178    "uuid": "cdf7d539-35a0-43ba-830e-062304c268fa",
00:24:27.178    "strip_size_kb": 64,
00:24:27.178    "state": "online",
00:24:27.178    "raid_level": "raid5f",
00:24:27.178    "superblock": false,
00:24:27.178    "num_base_bdevs": 4,
00:24:27.178    "num_base_bdevs_discovered": 3,
00:24:27.178    "num_base_bdevs_operational": 3,
00:24:27.178    "base_bdevs_list": [
00:24:27.178      {
00:24:27.178        "name": null,
00:24:27.178        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:27.178        "is_configured": false,
00:24:27.178        "data_offset": 0,
00:24:27.178        "data_size": 65536
00:24:27.178      },
00:24:27.178      {
00:24:27.178        "name": "BaseBdev2",
00:24:27.178        "uuid": "0618a36c-e8be-45ab-9638-5280ea7503f4",
00:24:27.178        "is_configured": true,
00:24:27.178        "data_offset": 0,
00:24:27.178        "data_size": 65536
00:24:27.178      },
00:24:27.178      {
00:24:27.178        "name": "BaseBdev3",
00:24:27.178        "uuid": "92a4b02a-b381-4e50-935e-fac891307510",
00:24:27.178        "is_configured": true,
00:24:27.178        "data_offset": 0,
00:24:27.178        "data_size": 65536
00:24:27.178      },
00:24:27.178      {
00:24:27.178        "name": "BaseBdev4",
00:24:27.178        "uuid": "58005421-6f64-48f1-8ae2-b9001dc04abe",
00:24:27.178        "is_configured": true,
00:24:27.178        "data_offset": 0,
00:24:27.178        "data_size": 65536
00:24:27.178      }
00:24:27.178    ]
00:24:27.178  }'
00:24:27.178   23:57:57	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:27.178   23:57:57	-- common/autotest_common.sh@10 -- # set +x
00:24:27.745   23:57:58	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:27.745   23:57:58	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:27.745   23:57:58	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:24:27.745   23:57:58	-- bdev/bdev_raid.sh@185 -- # local target=none
00:24:27.745   23:57:58	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:27.745    23:57:58	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:27.745    23:57:58	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:28.003   23:57:58	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:28.003    "name": "raid_bdev1",
00:24:28.003    "uuid": "cdf7d539-35a0-43ba-830e-062304c268fa",
00:24:28.003    "strip_size_kb": 64,
00:24:28.003    "state": "online",
00:24:28.003    "raid_level": "raid5f",
00:24:28.003    "superblock": false,
00:24:28.003    "num_base_bdevs": 4,
00:24:28.003    "num_base_bdevs_discovered": 3,
00:24:28.003    "num_base_bdevs_operational": 3,
00:24:28.003    "base_bdevs_list": [
00:24:28.003      {
00:24:28.003        "name": null,
00:24:28.003        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:28.003        "is_configured": false,
00:24:28.003        "data_offset": 0,
00:24:28.003        "data_size": 65536
00:24:28.003      },
00:24:28.003      {
00:24:28.003        "name": "BaseBdev2",
00:24:28.003        "uuid": "0618a36c-e8be-45ab-9638-5280ea7503f4",
00:24:28.003        "is_configured": true,
00:24:28.003        "data_offset": 0,
00:24:28.003        "data_size": 65536
00:24:28.003      },
00:24:28.003      {
00:24:28.003        "name": "BaseBdev3",
00:24:28.003        "uuid": "92a4b02a-b381-4e50-935e-fac891307510",
00:24:28.003        "is_configured": true,
00:24:28.003        "data_offset": 0,
00:24:28.003        "data_size": 65536
00:24:28.003      },
00:24:28.003      {
00:24:28.003        "name": "BaseBdev4",
00:24:28.003        "uuid": "58005421-6f64-48f1-8ae2-b9001dc04abe",
00:24:28.003        "is_configured": true,
00:24:28.003        "data_offset": 0,
00:24:28.003        "data_size": 65536
00:24:28.003      }
00:24:28.003    ]
00:24:28.003  }'
00:24:28.003    23:57:58	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:28.003   23:57:58	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:24:28.003    23:57:58	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:28.003   23:57:58	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:24:28.003   23:57:58	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:24:28.261  [2024-12-13 23:57:58.821198] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:24:28.261  [2024-12-13 23:57:58.821360] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:28.261  [2024-12-13 23:57:58.830787] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270
00:24:28.261  [2024-12-13 23:57:58.837840] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:28.261   23:57:58	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:24:29.196   23:57:59	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:29.196   23:57:59	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:29.196   23:57:59	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:29.196   23:57:59	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:29.196   23:57:59	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:29.196    23:57:59	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:29.196    23:57:59	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:29.454   23:58:00	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:29.454    "name": "raid_bdev1",
00:24:29.454    "uuid": "cdf7d539-35a0-43ba-830e-062304c268fa",
00:24:29.454    "strip_size_kb": 64,
00:24:29.454    "state": "online",
00:24:29.454    "raid_level": "raid5f",
00:24:29.454    "superblock": false,
00:24:29.454    "num_base_bdevs": 4,
00:24:29.454    "num_base_bdevs_discovered": 4,
00:24:29.454    "num_base_bdevs_operational": 4,
00:24:29.454    "process": {
00:24:29.454      "type": "rebuild",
00:24:29.454      "target": "spare",
00:24:29.454      "progress": {
00:24:29.454        "blocks": 21120,
00:24:29.454        "percent": 10
00:24:29.454      }
00:24:29.454    },
00:24:29.454    "base_bdevs_list": [
00:24:29.454      {
00:24:29.454        "name": "spare",
00:24:29.454        "uuid": "c9f4cd46-4f8e-5cf2-92d0-fc6d2f73f84f",
00:24:29.454        "is_configured": true,
00:24:29.454        "data_offset": 0,
00:24:29.454        "data_size": 65536
00:24:29.454      },
00:24:29.454      {
00:24:29.454        "name": "BaseBdev2",
00:24:29.454        "uuid": "0618a36c-e8be-45ab-9638-5280ea7503f4",
00:24:29.454        "is_configured": true,
00:24:29.454        "data_offset": 0,
00:24:29.454        "data_size": 65536
00:24:29.454      },
00:24:29.454      {
00:24:29.454        "name": "BaseBdev3",
00:24:29.454        "uuid": "92a4b02a-b381-4e50-935e-fac891307510",
00:24:29.454        "is_configured": true,
00:24:29.454        "data_offset": 0,
00:24:29.454        "data_size": 65536
00:24:29.454      },
00:24:29.454      {
00:24:29.454        "name": "BaseBdev4",
00:24:29.454        "uuid": "58005421-6f64-48f1-8ae2-b9001dc04abe",
00:24:29.454        "is_configured": true,
00:24:29.454        "data_offset": 0,
00:24:29.454        "data_size": 65536
00:24:29.454      }
00:24:29.454    ]
00:24:29.454  }'
00:24:29.454    23:58:00	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:29.454   23:58:00	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:29.454    23:58:00	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:29.454   23:58:00	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:29.454   23:58:00	-- bdev/bdev_raid.sh@617 -- # '[' false = true ']'
00:24:29.454   23:58:00	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4
00:24:29.454   23:58:00	-- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']'
00:24:29.454   23:58:00	-- bdev/bdev_raid.sh@657 -- # local timeout=691
00:24:29.454   23:58:00	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:29.454   23:58:00	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:29.454   23:58:00	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:29.454   23:58:00	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:29.454   23:58:00	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:29.454   23:58:00	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:29.454    23:58:00	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:29.454    23:58:00	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:29.712   23:58:00	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:29.712    "name": "raid_bdev1",
00:24:29.712    "uuid": "cdf7d539-35a0-43ba-830e-062304c268fa",
00:24:29.712    "strip_size_kb": 64,
00:24:29.712    "state": "online",
00:24:29.712    "raid_level": "raid5f",
00:24:29.712    "superblock": false,
00:24:29.712    "num_base_bdevs": 4,
00:24:29.712    "num_base_bdevs_discovered": 4,
00:24:29.712    "num_base_bdevs_operational": 4,
00:24:29.712    "process": {
00:24:29.712      "type": "rebuild",
00:24:29.712      "target": "spare",
00:24:29.712      "progress": {
00:24:29.712        "blocks": 26880,
00:24:29.712        "percent": 13
00:24:29.712      }
00:24:29.712    },
00:24:29.712    "base_bdevs_list": [
00:24:29.712      {
00:24:29.712        "name": "spare",
00:24:29.712        "uuid": "c9f4cd46-4f8e-5cf2-92d0-fc6d2f73f84f",
00:24:29.712        "is_configured": true,
00:24:29.712        "data_offset": 0,
00:24:29.712        "data_size": 65536
00:24:29.712      },
00:24:29.712      {
00:24:29.712        "name": "BaseBdev2",
00:24:29.712        "uuid": "0618a36c-e8be-45ab-9638-5280ea7503f4",
00:24:29.712        "is_configured": true,
00:24:29.712        "data_offset": 0,
00:24:29.712        "data_size": 65536
00:24:29.712      },
00:24:29.712      {
00:24:29.712        "name": "BaseBdev3",
00:24:29.712        "uuid": "92a4b02a-b381-4e50-935e-fac891307510",
00:24:29.712        "is_configured": true,
00:24:29.712        "data_offset": 0,
00:24:29.712        "data_size": 65536
00:24:29.712      },
00:24:29.712      {
00:24:29.712        "name": "BaseBdev4",
00:24:29.712        "uuid": "58005421-6f64-48f1-8ae2-b9001dc04abe",
00:24:29.712        "is_configured": true,
00:24:29.712        "data_offset": 0,
00:24:29.712        "data_size": 65536
00:24:29.712      }
00:24:29.712    ]
00:24:29.712  }'
00:24:29.712    23:58:00	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:29.712   23:58:00	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:29.713    23:58:00	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:29.713   23:58:00	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:29.713   23:58:00	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:31.088   23:58:01	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:31.088   23:58:01	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:31.088   23:58:01	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:31.088   23:58:01	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:31.088   23:58:01	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:31.088   23:58:01	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:31.088    23:58:01	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:31.088    23:58:01	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:31.088   23:58:01	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:31.088    "name": "raid_bdev1",
00:24:31.088    "uuid": "cdf7d539-35a0-43ba-830e-062304c268fa",
00:24:31.088    "strip_size_kb": 64,
00:24:31.088    "state": "online",
00:24:31.088    "raid_level": "raid5f",
00:24:31.088    "superblock": false,
00:24:31.088    "num_base_bdevs": 4,
00:24:31.088    "num_base_bdevs_discovered": 4,
00:24:31.088    "num_base_bdevs_operational": 4,
00:24:31.088    "process": {
00:24:31.088      "type": "rebuild",
00:24:31.088      "target": "spare",
00:24:31.088      "progress": {
00:24:31.088        "blocks": 51840,
00:24:31.088        "percent": 26
00:24:31.088      }
00:24:31.088    },
00:24:31.088    "base_bdevs_list": [
00:24:31.088      {
00:24:31.088        "name": "spare",
00:24:31.088        "uuid": "c9f4cd46-4f8e-5cf2-92d0-fc6d2f73f84f",
00:24:31.088        "is_configured": true,
00:24:31.088        "data_offset": 0,
00:24:31.088        "data_size": 65536
00:24:31.088      },
00:24:31.088      {
00:24:31.088        "name": "BaseBdev2",
00:24:31.088        "uuid": "0618a36c-e8be-45ab-9638-5280ea7503f4",
00:24:31.088        "is_configured": true,
00:24:31.088        "data_offset": 0,
00:24:31.088        "data_size": 65536
00:24:31.088      },
00:24:31.088      {
00:24:31.088        "name": "BaseBdev3",
00:24:31.088        "uuid": "92a4b02a-b381-4e50-935e-fac891307510",
00:24:31.088        "is_configured": true,
00:24:31.088        "data_offset": 0,
00:24:31.088        "data_size": 65536
00:24:31.088      },
00:24:31.088      {
00:24:31.088        "name": "BaseBdev4",
00:24:31.088        "uuid": "58005421-6f64-48f1-8ae2-b9001dc04abe",
00:24:31.088        "is_configured": true,
00:24:31.088        "data_offset": 0,
00:24:31.088        "data_size": 65536
00:24:31.088      }
00:24:31.088    ]
00:24:31.088  }'
00:24:31.088    23:58:01	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:31.088   23:58:01	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:31.088    23:58:01	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:31.088   23:58:01	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:31.088   23:58:01	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:32.463   23:58:02	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:32.464   23:58:02	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:32.464   23:58:02	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:32.464   23:58:02	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:32.464   23:58:02	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:32.464   23:58:02	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:32.464    23:58:02	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:32.464    23:58:02	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:32.464   23:58:02	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:32.464    "name": "raid_bdev1",
00:24:32.464    "uuid": "cdf7d539-35a0-43ba-830e-062304c268fa",
00:24:32.464    "strip_size_kb": 64,
00:24:32.464    "state": "online",
00:24:32.464    "raid_level": "raid5f",
00:24:32.464    "superblock": false,
00:24:32.464    "num_base_bdevs": 4,
00:24:32.464    "num_base_bdevs_discovered": 4,
00:24:32.464    "num_base_bdevs_operational": 4,
00:24:32.464    "process": {
00:24:32.464      "type": "rebuild",
00:24:32.464      "target": "spare",
00:24:32.464      "progress": {
00:24:32.464        "blocks": 76800,
00:24:32.464        "percent": 39
00:24:32.464      }
00:24:32.464    },
00:24:32.464    "base_bdevs_list": [
00:24:32.464      {
00:24:32.464        "name": "spare",
00:24:32.464        "uuid": "c9f4cd46-4f8e-5cf2-92d0-fc6d2f73f84f",
00:24:32.464        "is_configured": true,
00:24:32.464        "data_offset": 0,
00:24:32.464        "data_size": 65536
00:24:32.464      },
00:24:32.464      {
00:24:32.464        "name": "BaseBdev2",
00:24:32.464        "uuid": "0618a36c-e8be-45ab-9638-5280ea7503f4",
00:24:32.464        "is_configured": true,
00:24:32.464        "data_offset": 0,
00:24:32.464        "data_size": 65536
00:24:32.464      },
00:24:32.464      {
00:24:32.464        "name": "BaseBdev3",
00:24:32.464        "uuid": "92a4b02a-b381-4e50-935e-fac891307510",
00:24:32.464        "is_configured": true,
00:24:32.464        "data_offset": 0,
00:24:32.464        "data_size": 65536
00:24:32.464      },
00:24:32.464      {
00:24:32.464        "name": "BaseBdev4",
00:24:32.464        "uuid": "58005421-6f64-48f1-8ae2-b9001dc04abe",
00:24:32.464        "is_configured": true,
00:24:32.464        "data_offset": 0,
00:24:32.464        "data_size": 65536
00:24:32.464      }
00:24:32.464    ]
00:24:32.464  }'
00:24:32.464    23:58:02	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:32.464   23:58:03	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:32.464    23:58:03	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:32.464   23:58:03	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:32.464   23:58:03	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:33.398   23:58:04	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:33.398   23:58:04	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:33.398   23:58:04	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:33.398   23:58:04	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:33.398   23:58:04	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:33.398   23:58:04	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:33.398    23:58:04	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:33.398    23:58:04	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:33.656   23:58:04	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:33.656    "name": "raid_bdev1",
00:24:33.656    "uuid": "cdf7d539-35a0-43ba-830e-062304c268fa",
00:24:33.656    "strip_size_kb": 64,
00:24:33.656    "state": "online",
00:24:33.656    "raid_level": "raid5f",
00:24:33.656    "superblock": false,
00:24:33.656    "num_base_bdevs": 4,
00:24:33.656    "num_base_bdevs_discovered": 4,
00:24:33.656    "num_base_bdevs_operational": 4,
00:24:33.656    "process": {
00:24:33.656      "type": "rebuild",
00:24:33.656      "target": "spare",
00:24:33.656      "progress": {
00:24:33.656        "blocks": 103680,
00:24:33.656        "percent": 52
00:24:33.656      }
00:24:33.656    },
00:24:33.656    "base_bdevs_list": [
00:24:33.656      {
00:24:33.656        "name": "spare",
00:24:33.656        "uuid": "c9f4cd46-4f8e-5cf2-92d0-fc6d2f73f84f",
00:24:33.656        "is_configured": true,
00:24:33.656        "data_offset": 0,
00:24:33.656        "data_size": 65536
00:24:33.656      },
00:24:33.656      {
00:24:33.656        "name": "BaseBdev2",
00:24:33.656        "uuid": "0618a36c-e8be-45ab-9638-5280ea7503f4",
00:24:33.656        "is_configured": true,
00:24:33.656        "data_offset": 0,
00:24:33.656        "data_size": 65536
00:24:33.656      },
00:24:33.656      {
00:24:33.656        "name": "BaseBdev3",
00:24:33.656        "uuid": "92a4b02a-b381-4e50-935e-fac891307510",
00:24:33.656        "is_configured": true,
00:24:33.656        "data_offset": 0,
00:24:33.656        "data_size": 65536
00:24:33.656      },
00:24:33.656      {
00:24:33.656        "name": "BaseBdev4",
00:24:33.656        "uuid": "58005421-6f64-48f1-8ae2-b9001dc04abe",
00:24:33.656        "is_configured": true,
00:24:33.656        "data_offset": 0,
00:24:33.656        "data_size": 65536
00:24:33.656      }
00:24:33.656    ]
00:24:33.656  }'
00:24:33.656    23:58:04	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:33.656   23:58:04	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:33.656    23:58:04	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:33.914   23:58:04	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:33.914   23:58:04	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:34.849   23:58:05	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:34.849   23:58:05	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:34.849   23:58:05	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:34.849   23:58:05	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:34.849   23:58:05	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:34.849   23:58:05	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:34.849    23:58:05	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:34.849    23:58:05	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:35.107   23:58:05	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:35.107    "name": "raid_bdev1",
00:24:35.107    "uuid": "cdf7d539-35a0-43ba-830e-062304c268fa",
00:24:35.107    "strip_size_kb": 64,
00:24:35.107    "state": "online",
00:24:35.107    "raid_level": "raid5f",
00:24:35.107    "superblock": false,
00:24:35.107    "num_base_bdevs": 4,
00:24:35.107    "num_base_bdevs_discovered": 4,
00:24:35.107    "num_base_bdevs_operational": 4,
00:24:35.107    "process": {
00:24:35.107      "type": "rebuild",
00:24:35.107      "target": "spare",
00:24:35.107      "progress": {
00:24:35.107        "blocks": 128640,
00:24:35.107        "percent": 65
00:24:35.107      }
00:24:35.107    },
00:24:35.107    "base_bdevs_list": [
00:24:35.107      {
00:24:35.107        "name": "spare",
00:24:35.107        "uuid": "c9f4cd46-4f8e-5cf2-92d0-fc6d2f73f84f",
00:24:35.108        "is_configured": true,
00:24:35.108        "data_offset": 0,
00:24:35.108        "data_size": 65536
00:24:35.108      },
00:24:35.108      {
00:24:35.108        "name": "BaseBdev2",
00:24:35.108        "uuid": "0618a36c-e8be-45ab-9638-5280ea7503f4",
00:24:35.108        "is_configured": true,
00:24:35.108        "data_offset": 0,
00:24:35.108        "data_size": 65536
00:24:35.108      },
00:24:35.108      {
00:24:35.108        "name": "BaseBdev3",
00:24:35.108        "uuid": "92a4b02a-b381-4e50-935e-fac891307510",
00:24:35.108        "is_configured": true,
00:24:35.108        "data_offset": 0,
00:24:35.108        "data_size": 65536
00:24:35.108      },
00:24:35.108      {
00:24:35.108        "name": "BaseBdev4",
00:24:35.108        "uuid": "58005421-6f64-48f1-8ae2-b9001dc04abe",
00:24:35.108        "is_configured": true,
00:24:35.108        "data_offset": 0,
00:24:35.108        "data_size": 65536
00:24:35.108      }
00:24:35.108    ]
00:24:35.108  }'
00:24:35.108    23:58:05	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:35.108   23:58:05	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:35.108    23:58:05	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:35.108   23:58:05	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:35.108   23:58:05	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:36.043   23:58:06	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:36.043   23:58:06	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:36.043   23:58:06	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:36.043   23:58:06	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:36.043   23:58:06	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:36.043   23:58:06	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:36.043    23:58:06	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:36.043    23:58:06	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:36.301   23:58:07	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:36.301    "name": "raid_bdev1",
00:24:36.301    "uuid": "cdf7d539-35a0-43ba-830e-062304c268fa",
00:24:36.301    "strip_size_kb": 64,
00:24:36.301    "state": "online",
00:24:36.301    "raid_level": "raid5f",
00:24:36.301    "superblock": false,
00:24:36.301    "num_base_bdevs": 4,
00:24:36.301    "num_base_bdevs_discovered": 4,
00:24:36.301    "num_base_bdevs_operational": 4,
00:24:36.301    "process": {
00:24:36.301      "type": "rebuild",
00:24:36.301      "target": "spare",
00:24:36.301      "progress": {
00:24:36.301        "blocks": 155520,
00:24:36.301        "percent": 79
00:24:36.301      }
00:24:36.301    },
00:24:36.301    "base_bdevs_list": [
00:24:36.301      {
00:24:36.301        "name": "spare",
00:24:36.301        "uuid": "c9f4cd46-4f8e-5cf2-92d0-fc6d2f73f84f",
00:24:36.301        "is_configured": true,
00:24:36.301        "data_offset": 0,
00:24:36.301        "data_size": 65536
00:24:36.301      },
00:24:36.301      {
00:24:36.301        "name": "BaseBdev2",
00:24:36.301        "uuid": "0618a36c-e8be-45ab-9638-5280ea7503f4",
00:24:36.301        "is_configured": true,
00:24:36.301        "data_offset": 0,
00:24:36.301        "data_size": 65536
00:24:36.301      },
00:24:36.301      {
00:24:36.301        "name": "BaseBdev3",
00:24:36.301        "uuid": "92a4b02a-b381-4e50-935e-fac891307510",
00:24:36.301        "is_configured": true,
00:24:36.301        "data_offset": 0,
00:24:36.301        "data_size": 65536
00:24:36.301      },
00:24:36.301      {
00:24:36.301        "name": "BaseBdev4",
00:24:36.301        "uuid": "58005421-6f64-48f1-8ae2-b9001dc04abe",
00:24:36.301        "is_configured": true,
00:24:36.301        "data_offset": 0,
00:24:36.301        "data_size": 65536
00:24:36.301      }
00:24:36.301    ]
00:24:36.301  }'
00:24:36.301    23:58:07	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:36.559   23:58:07	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:36.559    23:58:07	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:36.559   23:58:07	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:36.559   23:58:07	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:37.494   23:58:08	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:37.494   23:58:08	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:37.494   23:58:08	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:37.494   23:58:08	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:37.494   23:58:08	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:37.494   23:58:08	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:37.494    23:58:08	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:37.494    23:58:08	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:37.754   23:58:08	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:37.754    "name": "raid_bdev1",
00:24:37.754    "uuid": "cdf7d539-35a0-43ba-830e-062304c268fa",
00:24:37.754    "strip_size_kb": 64,
00:24:37.754    "state": "online",
00:24:37.754    "raid_level": "raid5f",
00:24:37.754    "superblock": false,
00:24:37.754    "num_base_bdevs": 4,
00:24:37.754    "num_base_bdevs_discovered": 4,
00:24:37.754    "num_base_bdevs_operational": 4,
00:24:37.754    "process": {
00:24:37.754      "type": "rebuild",
00:24:37.754      "target": "spare",
00:24:37.754      "progress": {
00:24:37.754        "blocks": 180480,
00:24:37.754        "percent": 91
00:24:37.754      }
00:24:37.754    },
00:24:37.754    "base_bdevs_list": [
00:24:37.754      {
00:24:37.754        "name": "spare",
00:24:37.754        "uuid": "c9f4cd46-4f8e-5cf2-92d0-fc6d2f73f84f",
00:24:37.754        "is_configured": true,
00:24:37.754        "data_offset": 0,
00:24:37.754        "data_size": 65536
00:24:37.754      },
00:24:37.754      {
00:24:37.754        "name": "BaseBdev2",
00:24:37.754        "uuid": "0618a36c-e8be-45ab-9638-5280ea7503f4",
00:24:37.754        "is_configured": true,
00:24:37.754        "data_offset": 0,
00:24:37.754        "data_size": 65536
00:24:37.754      },
00:24:37.754      {
00:24:37.754        "name": "BaseBdev3",
00:24:37.754        "uuid": "92a4b02a-b381-4e50-935e-fac891307510",
00:24:37.754        "is_configured": true,
00:24:37.754        "data_offset": 0,
00:24:37.754        "data_size": 65536
00:24:37.754      },
00:24:37.754      {
00:24:37.754        "name": "BaseBdev4",
00:24:37.754        "uuid": "58005421-6f64-48f1-8ae2-b9001dc04abe",
00:24:37.754        "is_configured": true,
00:24:37.754        "data_offset": 0,
00:24:37.754        "data_size": 65536
00:24:37.754      }
00:24:37.754    ]
00:24:37.754  }'
00:24:37.754    23:58:08	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:37.754   23:58:08	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:37.754    23:58:08	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:37.754   23:58:08	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:37.754   23:58:08	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:38.690  [2024-12-13 23:58:09.207189] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:24:38.690  [2024-12-13 23:58:09.207404] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:24:38.690  [2024-12-13 23:58:09.207612] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:38.949   23:58:09	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:38.949   23:58:09	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:38.949   23:58:09	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:38.949   23:58:09	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:38.949   23:58:09	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:38.949   23:58:09	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:38.949    23:58:09	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:38.949    23:58:09	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:39.208   23:58:09	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:39.208    "name": "raid_bdev1",
00:24:39.208    "uuid": "cdf7d539-35a0-43ba-830e-062304c268fa",
00:24:39.208    "strip_size_kb": 64,
00:24:39.208    "state": "online",
00:24:39.208    "raid_level": "raid5f",
00:24:39.208    "superblock": false,
00:24:39.208    "num_base_bdevs": 4,
00:24:39.208    "num_base_bdevs_discovered": 4,
00:24:39.208    "num_base_bdevs_operational": 4,
00:24:39.208    "base_bdevs_list": [
00:24:39.208      {
00:24:39.208        "name": "spare",
00:24:39.208        "uuid": "c9f4cd46-4f8e-5cf2-92d0-fc6d2f73f84f",
00:24:39.208        "is_configured": true,
00:24:39.208        "data_offset": 0,
00:24:39.208        "data_size": 65536
00:24:39.208      },
00:24:39.208      {
00:24:39.208        "name": "BaseBdev2",
00:24:39.208        "uuid": "0618a36c-e8be-45ab-9638-5280ea7503f4",
00:24:39.208        "is_configured": true,
00:24:39.208        "data_offset": 0,
00:24:39.208        "data_size": 65536
00:24:39.208      },
00:24:39.208      {
00:24:39.208        "name": "BaseBdev3",
00:24:39.208        "uuid": "92a4b02a-b381-4e50-935e-fac891307510",
00:24:39.208        "is_configured": true,
00:24:39.208        "data_offset": 0,
00:24:39.208        "data_size": 65536
00:24:39.208      },
00:24:39.208      {
00:24:39.208        "name": "BaseBdev4",
00:24:39.208        "uuid": "58005421-6f64-48f1-8ae2-b9001dc04abe",
00:24:39.208        "is_configured": true,
00:24:39.208        "data_offset": 0,
00:24:39.208        "data_size": 65536
00:24:39.208      }
00:24:39.208    ]
00:24:39.208  }'
00:24:39.208    23:58:09	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:39.208   23:58:09	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:24:39.208    23:58:09	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:39.208   23:58:09	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:24:39.208   23:58:09	-- bdev/bdev_raid.sh@660 -- # break
00:24:39.208   23:58:09	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:39.208   23:58:09	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:39.208   23:58:09	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:24:39.208   23:58:09	-- bdev/bdev_raid.sh@185 -- # local target=none
00:24:39.208   23:58:09	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:39.208    23:58:09	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:39.208    23:58:09	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:39.467   23:58:10	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:39.467    "name": "raid_bdev1",
00:24:39.467    "uuid": "cdf7d539-35a0-43ba-830e-062304c268fa",
00:24:39.467    "strip_size_kb": 64,
00:24:39.467    "state": "online",
00:24:39.467    "raid_level": "raid5f",
00:24:39.467    "superblock": false,
00:24:39.467    "num_base_bdevs": 4,
00:24:39.467    "num_base_bdevs_discovered": 4,
00:24:39.467    "num_base_bdevs_operational": 4,
00:24:39.467    "base_bdevs_list": [
00:24:39.467      {
00:24:39.467        "name": "spare",
00:24:39.467        "uuid": "c9f4cd46-4f8e-5cf2-92d0-fc6d2f73f84f",
00:24:39.467        "is_configured": true,
00:24:39.467        "data_offset": 0,
00:24:39.467        "data_size": 65536
00:24:39.467      },
00:24:39.467      {
00:24:39.467        "name": "BaseBdev2",
00:24:39.467        "uuid": "0618a36c-e8be-45ab-9638-5280ea7503f4",
00:24:39.467        "is_configured": true,
00:24:39.467        "data_offset": 0,
00:24:39.467        "data_size": 65536
00:24:39.467      },
00:24:39.467      {
00:24:39.467        "name": "BaseBdev3",
00:24:39.467        "uuid": "92a4b02a-b381-4e50-935e-fac891307510",
00:24:39.467        "is_configured": true,
00:24:39.467        "data_offset": 0,
00:24:39.467        "data_size": 65536
00:24:39.467      },
00:24:39.467      {
00:24:39.467        "name": "BaseBdev4",
00:24:39.467        "uuid": "58005421-6f64-48f1-8ae2-b9001dc04abe",
00:24:39.467        "is_configured": true,
00:24:39.467        "data_offset": 0,
00:24:39.467        "data_size": 65536
00:24:39.467      }
00:24:39.467    ]
00:24:39.467  }'
00:24:39.467    23:58:10	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:39.467   23:58:10	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:24:39.467    23:58:10	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:39.467   23:58:10	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:24:39.467   23:58:10	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:24:39.467   23:58:10	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:39.467   23:58:10	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:39.467   23:58:10	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:39.467   23:58:10	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:39.467   23:58:10	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:39.467   23:58:10	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:39.467   23:58:10	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:39.467   23:58:10	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:39.467   23:58:10	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:39.467    23:58:10	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:39.467    23:58:10	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:39.726   23:58:10	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:39.726    "name": "raid_bdev1",
00:24:39.726    "uuid": "cdf7d539-35a0-43ba-830e-062304c268fa",
00:24:39.726    "strip_size_kb": 64,
00:24:39.726    "state": "online",
00:24:39.726    "raid_level": "raid5f",
00:24:39.726    "superblock": false,
00:24:39.726    "num_base_bdevs": 4,
00:24:39.726    "num_base_bdevs_discovered": 4,
00:24:39.726    "num_base_bdevs_operational": 4,
00:24:39.726    "base_bdevs_list": [
00:24:39.726      {
00:24:39.726        "name": "spare",
00:24:39.726        "uuid": "c9f4cd46-4f8e-5cf2-92d0-fc6d2f73f84f",
00:24:39.726        "is_configured": true,
00:24:39.726        "data_offset": 0,
00:24:39.726        "data_size": 65536
00:24:39.726      },
00:24:39.726      {
00:24:39.726        "name": "BaseBdev2",
00:24:39.726        "uuid": "0618a36c-e8be-45ab-9638-5280ea7503f4",
00:24:39.726        "is_configured": true,
00:24:39.726        "data_offset": 0,
00:24:39.726        "data_size": 65536
00:24:39.726      },
00:24:39.726      {
00:24:39.726        "name": "BaseBdev3",
00:24:39.726        "uuid": "92a4b02a-b381-4e50-935e-fac891307510",
00:24:39.726        "is_configured": true,
00:24:39.726        "data_offset": 0,
00:24:39.726        "data_size": 65536
00:24:39.726      },
00:24:39.726      {
00:24:39.726        "name": "BaseBdev4",
00:24:39.726        "uuid": "58005421-6f64-48f1-8ae2-b9001dc04abe",
00:24:39.726        "is_configured": true,
00:24:39.726        "data_offset": 0,
00:24:39.726        "data_size": 65536
00:24:39.726      }
00:24:39.726    ]
00:24:39.726  }'
00:24:39.726   23:58:10	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:39.726   23:58:10	-- common/autotest_common.sh@10 -- # set +x
00:24:40.321   23:58:10	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:24:40.579  [2024-12-13 23:58:11.107068] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:40.579  [2024-12-13 23:58:11.107217] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:24:40.580  [2024-12-13 23:58:11.107390] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:40.580  [2024-12-13 23:58:11.107604] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:40.580  [2024-12-13 23:58:11.107716] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline
00:24:40.580    23:58:11	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:40.580    23:58:11	-- bdev/bdev_raid.sh@671 -- # jq length
00:24:40.838   23:58:11	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:24:40.838   23:58:11	-- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:24:40.838   23:58:11	-- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:24:40.838   23:58:11	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:40.838   23:58:11	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:24:40.838   23:58:11	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:24:40.838   23:58:11	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:24:40.838   23:58:11	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:24:40.838   23:58:11	-- bdev/nbd_common.sh@12 -- # local i
00:24:40.838   23:58:11	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:24:40.838   23:58:11	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:24:40.838   23:58:11	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:24:41.096  /dev/nbd0
00:24:41.096    23:58:11	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:24:41.096   23:58:11	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:24:41.096   23:58:11	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:24:41.096   23:58:11	-- common/autotest_common.sh@867 -- # local i
00:24:41.096   23:58:11	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:24:41.096   23:58:11	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:24:41.096   23:58:11	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:24:41.096   23:58:11	-- common/autotest_common.sh@871 -- # break
00:24:41.096   23:58:11	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:24:41.096   23:58:11	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:24:41.096   23:58:11	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:41.096  1+0 records in
00:24:41.096  1+0 records out
00:24:41.096  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499429 s, 8.2 MB/s
00:24:41.096    23:58:11	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:41.096   23:58:11	-- common/autotest_common.sh@884 -- # size=4096
00:24:41.096   23:58:11	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:41.096   23:58:11	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:24:41.096   23:58:11	-- common/autotest_common.sh@887 -- # return 0
00:24:41.096   23:58:11	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:41.096   23:58:11	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:24:41.096   23:58:11	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:24:41.355  /dev/nbd1
00:24:41.355    23:58:11	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:24:41.355   23:58:11	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:24:41.355   23:58:11	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:24:41.355   23:58:11	-- common/autotest_common.sh@867 -- # local i
00:24:41.355   23:58:11	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:24:41.355   23:58:11	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:24:41.355   23:58:11	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:24:41.355   23:58:11	-- common/autotest_common.sh@871 -- # break
00:24:41.355   23:58:11	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:24:41.355   23:58:11	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:24:41.355   23:58:11	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:41.355  1+0 records in
00:24:41.355  1+0 records out
00:24:41.355  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605081 s, 6.8 MB/s
00:24:41.355    23:58:11	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:41.355   23:58:11	-- common/autotest_common.sh@884 -- # size=4096
00:24:41.355   23:58:11	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:41.355   23:58:11	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:24:41.355   23:58:11	-- common/autotest_common.sh@887 -- # return 0
00:24:41.355   23:58:11	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:41.355   23:58:11	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:24:41.355   23:58:11	-- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:24:41.615   23:58:12	-- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:24:41.615   23:58:12	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:41.615   23:58:12	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:24:41.615   23:58:12	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:24:41.615   23:58:12	-- bdev/nbd_common.sh@51 -- # local i
00:24:41.615   23:58:12	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:41.615   23:58:12	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:24:41.615    23:58:12	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:24:41.615   23:58:12	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:24:41.615   23:58:12	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:24:41.615   23:58:12	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:41.615   23:58:12	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:41.615   23:58:12	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:24:41.615   23:58:12	-- bdev/nbd_common.sh@41 -- # break
00:24:41.615   23:58:12	-- bdev/nbd_common.sh@45 -- # return 0
00:24:41.615   23:58:12	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:41.615   23:58:12	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:24:41.874    23:58:12	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:24:41.874   23:58:12	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:24:41.874   23:58:12	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:24:41.874   23:58:12	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:41.874   23:58:12	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:41.874   23:58:12	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:24:41.874   23:58:12	-- bdev/nbd_common.sh@41 -- # break
00:24:41.874   23:58:12	-- bdev/nbd_common.sh@45 -- # return 0
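The silent `cmp -i 0 /dev/nbd0 /dev/nbd1` above is the data-integrity check of the rebuild test: both bdevs are exported over NBD and byte-compared, and `cmp` printing nothing means they are identical. A minimal sketch of the same pattern, assuming the repo-relative scripts/rpc.py path and reusing the socket and bdev names from this log:

    sock=/var/tmp/spdk-raid.sock
    scripts/rpc.py -s "$sock" nbd_start_disk BaseBdev1 /dev/nbd0
    scripts/rpc.py -s "$sock" nbd_start_disk spare /dev/nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1 && echo "rebuilt data matches"
    scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0
    scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd1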
00:24:41.874   23:58:12	-- bdev/bdev_raid.sh@692 -- # '[' false = true ']'
00:24:41.874   23:58:12	-- bdev/bdev_raid.sh@709 -- # killprocess 130851
00:24:41.874   23:58:12	-- common/autotest_common.sh@936 -- # '[' -z 130851 ']'
00:24:41.874   23:58:12	-- common/autotest_common.sh@940 -- # kill -0 130851
00:24:41.874    23:58:12	-- common/autotest_common.sh@941 -- # uname
00:24:41.874   23:58:12	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:41.874    23:58:12	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130851
00:24:41.874   23:58:12	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:24:41.874   23:58:12	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:24:41.874  killing process with pid 130851
00:24:41.874   23:58:12	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 130851'
00:24:41.874  Received shutdown signal, test time was about 60.000000 seconds
00:24:41.874  
00:24:41.874                                                                                                  Latency(us)
00:24:41.874  
[2024-12-13T23:58:12.606Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:41.874  
[2024-12-13T23:58:12.606Z]  ===================================================================================================================
00:24:41.874  
[2024-12-13T23:58:12.606Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:24:41.874   23:58:12	-- common/autotest_common.sh@955 -- # kill 130851
00:24:41.874   23:58:12	-- common/autotest_common.sh@960 -- # wait 130851
00:24:41.874  [2024-12-13 23:58:12.581426] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:24:42.441  [2024-12-13 23:58:12.914621] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:24:43.377  ************************************
00:24:43.377  END TEST raid5f_rebuild_test
00:24:43.377  ************************************
00:24:43.377   23:58:13	-- bdev/bdev_raid.sh@711 -- # return 0
00:24:43.377  
00:24:43.377  real	0m24.358s
00:24:43.377  user	0m35.216s
00:24:43.377  sys	0m2.610s
00:24:43.377   23:58:13	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:24:43.377   23:58:13	-- common/autotest_common.sh@10 -- # set +x
00:24:43.377   23:58:13	-- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false
00:24:43.377   23:58:13	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:24:43.377   23:58:13	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:24:43.377   23:58:13	-- common/autotest_common.sh@10 -- # set +x
00:24:43.377  ************************************
00:24:43.377  START TEST raid5f_rebuild_test_sb
00:24:43.377  ************************************
00:24:43.377   23:58:13	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 true false
00:24:43.377   23:58:13	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f
00:24:43.377   23:58:13	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:24:43.377   23:58:13	-- bdev/bdev_raid.sh@519 -- # local superblock=true
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@520 -- # local background_io=false
00:24:43.377    23:58:14	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:24:43.377    23:58:14	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:43.377    23:58:14	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:24:43.377    23:58:14	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:24:43.377    23:58:14	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:43.377    23:58:14	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:24:43.377    23:58:14	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:24:43.377    23:58:14	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:43.377    23:58:14	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:24:43.377    23:58:14	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:24:43.377    23:58:14	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:43.377    23:58:14	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev4
00:24:43.377    23:58:14	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:24:43.377    23:58:14	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']'
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@529 -- # '[' false = true ']'
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@533 -- # strip_size=64
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64'
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@539 -- # '[' true = true ']'
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@540 -- # create_arg+=' -s'
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@544 -- # raid_pid=131468
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@545 -- # waitforlisten 131468 /var/tmp/spdk-raid.sock
00:24:43.377   23:58:14	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:24:43.377   23:58:14	-- common/autotest_common.sh@829 -- # '[' -z 131468 ']'
00:24:43.377   23:58:14	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:24:43.377   23:58:14	-- common/autotest_common.sh@834 -- # local max_retries=100
00:24:43.377   23:58:14	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:24:43.377  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:24:43.377   23:58:14	-- common/autotest_common.sh@838 -- # xtrace_disable
00:24:43.377   23:58:14	-- common/autotest_common.sh@10 -- # set +x
00:24:43.377  [2024-12-13 23:58:14.079350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:24:43.377  [2024-12-13 23:58:14.079718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131468 ]
00:24:43.377  I/O size of 3145728 is greater than zero copy threshold (65536).
00:24:43.377  Zero copy mechanism will not be used.
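The two notices above follow directly from the bdevperf invocation at @543: `-o 3M` requests 3 MiB I/Os, which exceeds the 64 KiB zero-copy threshold, so bdevperf copies buffers instead. The arithmetic, for reference:

    echo $(( 3 * 1024 * 1024 ))   # 3145728, the I/O size from -o 3M
    echo $(( 64 * 1024 ))         # 65536, the zero copy threshold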
00:24:43.636  [2024-12-13 23:58:14.242605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:43.894  [2024-12-13 23:58:14.421786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:43.894  [2024-12-13 23:58:14.609150] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:44.461   23:58:14	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:44.461   23:58:14	-- common/autotest_common.sh@862 -- # return 0
00:24:44.461   23:58:14	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:24:44.461   23:58:14	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:24:44.461   23:58:14	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:24:44.719  BaseBdev1_malloc
00:24:44.719   23:58:15	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:24:44.719  [2024-12-13 23:58:15.389281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:24:44.719  [2024-12-13 23:58:15.389594] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:44.719  [2024-12-13 23:58:15.389670] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:24:44.719  [2024-12-13 23:58:15.389829] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:44.719  [2024-12-13 23:58:15.392154] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:44.719  [2024-12-13 23:58:15.392338] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:24:44.719  BaseBdev1
00:24:44.719   23:58:15	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:24:44.719   23:58:15	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:24:44.719   23:58:15	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:24:44.977  BaseBdev2_malloc
00:24:44.977   23:58:15	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:24:45.236  [2024-12-13 23:58:15.851651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:24:45.236  [2024-12-13 23:58:15.851880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:45.236  [2024-12-13 23:58:15.851965] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:24:45.236  [2024-12-13 23:58:15.852129] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:45.236  [2024-12-13 23:58:15.854422] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:45.236  [2024-12-13 23:58:15.854601] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:24:45.236  BaseBdev2
00:24:45.236   23:58:15	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:24:45.236   23:58:15	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:24:45.236   23:58:15	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:24:45.494  BaseBdev3_malloc
00:24:45.494   23:58:16	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:24:45.753  [2024-12-13 23:58:16.284563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:24:45.753  [2024-12-13 23:58:16.284782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:45.753  [2024-12-13 23:58:16.284862] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:24:45.753  [2024-12-13 23:58:16.285034] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:45.753  [2024-12-13 23:58:16.287514] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:45.753  [2024-12-13 23:58:16.287701] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:24:45.753  BaseBdev3
00:24:45.753   23:58:16	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:24:45.753   23:58:16	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:24:45.753   23:58:16	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:24:46.011  BaseBdev4_malloc
00:24:46.011   23:58:16	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:24:46.271  [2024-12-13 23:58:16.778473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:24:46.271  [2024-12-13 23:58:16.778741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:46.271  [2024-12-13 23:58:16.778818] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80
00:24:46.271  [2024-12-13 23:58:16.779149] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:46.271  [2024-12-13 23:58:16.781463] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:46.271  [2024-12-13 23:58:16.781663] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:24:46.271  BaseBdev4
00:24:46.271   23:58:16	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:24:46.271  spare_malloc
00:24:46.271   23:58:16	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:24:46.529  spare_delay
00:24:46.529   23:58:17	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:24:46.788  [2024-12-13 23:58:17.391376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:24:46.788  [2024-12-13 23:58:17.391600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:46.788  [2024-12-13 23:58:17.391672] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:24:46.788  [2024-12-13 23:58:17.391947] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:46.788  [2024-12-13 23:58:17.394332] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:46.788  [2024-12-13 23:58:17.394519] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:24:46.788  spare
00:24:46.788   23:58:17	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:24:47.047  [2024-12-13 23:58:17.583529] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:24:47.047  [2024-12-13 23:58:17.585748] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:24:47.047  [2024-12-13 23:58:17.585995] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:24:47.047  [2024-12-13 23:58:17.586170] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:24:47.047  [2024-12-13 23:58:17.586507] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580
00:24:47.047  [2024-12-13 23:58:17.586654] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:24:47.047  [2024-12-13 23:58:17.586796] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:24:47.047  [2024-12-13 23:58:17.592620] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580
00:24:47.047  [2024-12-13 23:58:17.592735] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580
00:24:47.047  [2024-12-13 23:58:17.592986] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:47.047   23:58:17	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:24:47.047   23:58:17	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:47.047   23:58:17	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:47.047   23:58:17	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:47.047   23:58:17	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:47.047   23:58:17	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:47.047   23:58:17	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:47.047   23:58:17	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:47.047   23:58:17	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:47.047   23:58:17	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:47.047    23:58:17	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:47.047    23:58:17	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:47.305   23:58:17	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:47.305    "name": "raid_bdev1",
00:24:47.305    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:24:47.305    "strip_size_kb": 64,
00:24:47.305    "state": "online",
00:24:47.305    "raid_level": "raid5f",
00:24:47.305    "superblock": true,
00:24:47.305    "num_base_bdevs": 4,
00:24:47.305    "num_base_bdevs_discovered": 4,
00:24:47.305    "num_base_bdevs_operational": 4,
00:24:47.305    "base_bdevs_list": [
00:24:47.305      {
00:24:47.305        "name": "BaseBdev1",
00:24:47.305        "uuid": "4a8325db-c58c-5e64-8542-5feeeef23775",
00:24:47.305        "is_configured": true,
00:24:47.305        "data_offset": 2048,
00:24:47.305        "data_size": 63488
00:24:47.305      },
00:24:47.305      {
00:24:47.305        "name": "BaseBdev2",
00:24:47.305        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:24:47.305        "is_configured": true,
00:24:47.305        "data_offset": 2048,
00:24:47.305        "data_size": 63488
00:24:47.305      },
00:24:47.305      {
00:24:47.305        "name": "BaseBdev3",
00:24:47.305        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:24:47.305        "is_configured": true,
00:24:47.305        "data_offset": 2048,
00:24:47.305        "data_size": 63488
00:24:47.305      },
00:24:47.305      {
00:24:47.305        "name": "BaseBdev4",
00:24:47.305        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:24:47.305        "is_configured": true,
00:24:47.305        "data_offset": 2048,
00:24:47.305        "data_size": 63488
00:24:47.305      }
00:24:47.305    ]
00:24:47.305  }'
00:24:47.305   23:58:17	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:47.305   23:58:17	-- common/autotest_common.sh@10 -- # set +x
00:24:47.872    23:58:18	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:24:47.872    23:58:18	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:24:47.872  [2024-12-13 23:58:18.543629] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:24:47.872   23:58:18	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464
00:24:47.872    23:58:18	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:47.872    23:58:18	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:24:48.131   23:58:18	-- bdev/bdev_raid.sh@570 -- # data_offset=2048
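The 190464-block raid size and 2048-block data offset read back above are consistent with how the array was built: each base bdev is a 32 MiB malloc bdev of 512 B blocks (65536 blocks), the on-bdev superblock (`-s`) accounts for the 2048-block offset, and raid5f across 4 bdevs stores 3 data strips per stripe. A quick check of that arithmetic:

    blocks_per_base=$(( 32 * 1024 * 1024 / 512 ))  # 65536
    data_size=$(( blocks_per_base - 2048 ))        # 63488, matches base_bdevs_list
    echo $(( data_size * (4 - 1) ))                # 190464, the raid_bdev_size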
00:24:48.131   23:58:18	-- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:24:48.131   23:58:18	-- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:24:48.131   23:58:18	-- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:24:48.131   23:58:18	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:48.131   23:58:18	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:24:48.131   23:58:18	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:24:48.131   23:58:18	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:24:48.131   23:58:18	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:24:48.131   23:58:18	-- bdev/nbd_common.sh@12 -- # local i
00:24:48.131   23:58:18	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:24:48.131   23:58:18	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:24:48.131   23:58:18	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:24:48.389  [2024-12-13 23:58:19.003604] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:24:48.389  /dev/nbd0
00:24:48.389    23:58:19	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:24:48.389   23:58:19	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:24:48.389   23:58:19	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:24:48.389   23:58:19	-- common/autotest_common.sh@867 -- # local i
00:24:48.389   23:58:19	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:24:48.389   23:58:19	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:24:48.389   23:58:19	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:24:48.389   23:58:19	-- common/autotest_common.sh@871 -- # break
00:24:48.389   23:58:19	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:24:48.389   23:58:19	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:24:48.389   23:58:19	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:48.389  1+0 records in
00:24:48.389  1+0 records out
00:24:48.389  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389896 s, 10.5 MB/s
00:24:48.389    23:58:19	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:48.389   23:58:19	-- common/autotest_common.sh@884 -- # size=4096
00:24:48.389   23:58:19	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:48.389   23:58:19	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:24:48.389   23:58:19	-- common/autotest_common.sh@887 -- # return 0
00:24:48.389   23:58:19	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:48.389   23:58:19	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:24:48.389   23:58:19	-- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']'
00:24:48.389   23:58:19	-- bdev/bdev_raid.sh@581 -- # write_unit_size=384
00:24:48.389   23:58:19	-- bdev/bdev_raid.sh@582 -- # echo 192
00:24:48.390   23:58:19	-- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct
00:24:48.956  496+0 records in
00:24:48.956  496+0 records out
00:24:48.956  97517568 bytes (98 MB, 93 MiB) copied, 0.494716 s, 197 MB/s
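The write size above is not arbitrary: for raid5f the test issues full-stripe writes. With a 64 KiB strip and 4 base bdevs (3 data strips per stripe), one full stripe is 192 KiB = 196608 bytes = 384 blocks of 512 B, and 190464 total blocks divided by 384 gives exactly 496 writes, which is why the dd byte count comes out even:

    echo $(( (4 - 1) * 64 * 1024 ))  # 196608, dd's bs (one full stripe)
    echo $(( 190464 / 384 ))         # 496, dd's count
    echo $(( 496 * 196608 ))         # 97517568 bytes, as dd reports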
00:24:48.956   23:58:19	-- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:24:48.956   23:58:19	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:48.956   23:58:19	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:24:48.956   23:58:19	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:24:48.956   23:58:19	-- bdev/nbd_common.sh@51 -- # local i
00:24:48.956   23:58:19	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:48.956   23:58:19	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:24:49.215    23:58:19	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:24:49.215   23:58:19	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:24:49.215   23:58:19	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:24:49.215   23:58:19	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:49.215   23:58:19	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:49.215   23:58:19	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:24:49.215  [2024-12-13 23:58:19.834713] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:49.215   23:58:19	-- bdev/nbd_common.sh@41 -- # break
00:24:49.215   23:58:19	-- bdev/nbd_common.sh@45 -- # return 0
00:24:49.215   23:58:19	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:24:49.473  [2024-12-13 23:58:20.070321] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:24:49.473   23:58:20	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:24:49.473   23:58:20	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:49.473   23:58:20	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:49.473   23:58:20	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:49.473   23:58:20	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:49.473   23:58:20	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:49.473   23:58:20	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:49.473   23:58:20	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:49.473   23:58:20	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:49.473   23:58:20	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:49.473    23:58:20	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:49.473    23:58:20	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:49.732   23:58:20	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:49.732    "name": "raid_bdev1",
00:24:49.732    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:24:49.732    "strip_size_kb": 64,
00:24:49.732    "state": "online",
00:24:49.732    "raid_level": "raid5f",
00:24:49.732    "superblock": true,
00:24:49.732    "num_base_bdevs": 4,
00:24:49.732    "num_base_bdevs_discovered": 3,
00:24:49.732    "num_base_bdevs_operational": 3,
00:24:49.732    "base_bdevs_list": [
00:24:49.732      {
00:24:49.732        "name": null,
00:24:49.732        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:49.732        "is_configured": false,
00:24:49.732        "data_offset": 2048,
00:24:49.732        "data_size": 63488
00:24:49.732      },
00:24:49.732      {
00:24:49.732        "name": "BaseBdev2",
00:24:49.732        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:24:49.732        "is_configured": true,
00:24:49.732        "data_offset": 2048,
00:24:49.732        "data_size": 63488
00:24:49.732      },
00:24:49.732      {
00:24:49.732        "name": "BaseBdev3",
00:24:49.732        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:24:49.732        "is_configured": true,
00:24:49.732        "data_offset": 2048,
00:24:49.732        "data_size": 63488
00:24:49.732      },
00:24:49.732      {
00:24:49.732        "name": "BaseBdev4",
00:24:49.732        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:24:49.732        "is_configured": true,
00:24:49.732        "data_offset": 2048,
00:24:49.732        "data_size": 63488
00:24:49.732      }
00:24:49.732    ]
00:24:49.732  }'
00:24:49.732   23:58:20	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:49.732   23:58:20	-- common/autotest_common.sh@10 -- # set +x
00:24:50.298   23:58:20	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:24:50.556  [2024-12-13 23:58:21.178498] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:24:50.556  [2024-12-13 23:58:21.178677] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:50.556  [2024-12-13 23:58:21.189231] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a710
00:24:50.556  [2024-12-13 23:58:21.196434] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:50.556   23:58:21	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:24:51.491   23:58:22	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:51.491   23:58:22	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:51.491   23:58:22	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:51.491   23:58:22	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:51.491   23:58:22	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:51.491    23:58:22	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:51.491    23:58:22	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:51.750   23:58:22	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:51.750    "name": "raid_bdev1",
00:24:51.750    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:24:51.750    "strip_size_kb": 64,
00:24:51.750    "state": "online",
00:24:51.750    "raid_level": "raid5f",
00:24:51.750    "superblock": true,
00:24:51.750    "num_base_bdevs": 4,
00:24:51.750    "num_base_bdevs_discovered": 4,
00:24:51.750    "num_base_bdevs_operational": 4,
00:24:51.750    "process": {
00:24:51.750      "type": "rebuild",
00:24:51.750      "target": "spare",
00:24:51.750      "progress": {
00:24:51.750        "blocks": 21120,
00:24:51.750        "percent": 11
00:24:51.750      }
00:24:51.750    },
00:24:51.750    "base_bdevs_list": [
00:24:51.750      {
00:24:51.750        "name": "spare",
00:24:51.750        "uuid": "ad453f55-ff21-5c42-b5e8-99c555d9858a",
00:24:51.750        "is_configured": true,
00:24:51.750        "data_offset": 2048,
00:24:51.750        "data_size": 63488
00:24:51.750      },
00:24:51.750      {
00:24:51.750        "name": "BaseBdev2",
00:24:51.750        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:24:51.750        "is_configured": true,
00:24:51.750        "data_offset": 2048,
00:24:51.750        "data_size": 63488
00:24:51.750      },
00:24:51.750      {
00:24:51.750        "name": "BaseBdev3",
00:24:51.750        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:24:51.750        "is_configured": true,
00:24:51.750        "data_offset": 2048,
00:24:51.750        "data_size": 63488
00:24:51.750      },
00:24:51.750      {
00:24:51.750        "name": "BaseBdev4",
00:24:51.750        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:24:51.750        "is_configured": true,
00:24:51.750        "data_offset": 2048,
00:24:51.750        "data_size": 63488
00:24:51.750      }
00:24:51.750    ]
00:24:51.750  }'
00:24:51.750    23:58:22	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:51.750   23:58:22	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:51.750    23:58:22	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:52.009   23:58:22	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:52.009   23:58:22	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:24:52.009  [2024-12-13 23:58:22.669745] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:52.009  [2024-12-13 23:58:22.707953] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:24:52.009  [2024-12-13 23:58:22.708146] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:52.267   23:58:22	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:24:52.267   23:58:22	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:52.267   23:58:22	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:52.267   23:58:22	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:52.267   23:58:22	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:52.267   23:58:22	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:52.267   23:58:22	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:52.267   23:58:22	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:52.267   23:58:22	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:52.267   23:58:22	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:52.267    23:58:22	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:52.267    23:58:22	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:52.267   23:58:22	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:52.267    "name": "raid_bdev1",
00:24:52.267    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:24:52.267    "strip_size_kb": 64,
00:24:52.267    "state": "online",
00:24:52.267    "raid_level": "raid5f",
00:24:52.267    "superblock": true,
00:24:52.267    "num_base_bdevs": 4,
00:24:52.267    "num_base_bdevs_discovered": 3,
00:24:52.267    "num_base_bdevs_operational": 3,
00:24:52.267    "base_bdevs_list": [
00:24:52.267      {
00:24:52.267        "name": null,
00:24:52.267        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:52.267        "is_configured": false,
00:24:52.267        "data_offset": 2048,
00:24:52.267        "data_size": 63488
00:24:52.267      },
00:24:52.267      {
00:24:52.267        "name": "BaseBdev2",
00:24:52.267        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:24:52.267        "is_configured": true,
00:24:52.267        "data_offset": 2048,
00:24:52.267        "data_size": 63488
00:24:52.267      },
00:24:52.267      {
00:24:52.267        "name": "BaseBdev3",
00:24:52.267        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:24:52.267        "is_configured": true,
00:24:52.267        "data_offset": 2048,
00:24:52.267        "data_size": 63488
00:24:52.267      },
00:24:52.267      {
00:24:52.267        "name": "BaseBdev4",
00:24:52.267        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:24:52.267        "is_configured": true,
00:24:52.267        "data_offset": 2048,
00:24:52.267        "data_size": 63488
00:24:52.267      }
00:24:52.267    ]
00:24:52.267  }'
00:24:52.267   23:58:22	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:52.267   23:58:22	-- common/autotest_common.sh@10 -- # set +x
00:24:53.203   23:58:23	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:53.203   23:58:23	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:53.203   23:58:23	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:24:53.203   23:58:23	-- bdev/bdev_raid.sh@185 -- # local target=none
00:24:53.203   23:58:23	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:53.203    23:58:23	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:53.203    23:58:23	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:53.203   23:58:23	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:53.203    "name": "raid_bdev1",
00:24:53.203    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:24:53.203    "strip_size_kb": 64,
00:24:53.203    "state": "online",
00:24:53.203    "raid_level": "raid5f",
00:24:53.203    "superblock": true,
00:24:53.203    "num_base_bdevs": 4,
00:24:53.203    "num_base_bdevs_discovered": 3,
00:24:53.203    "num_base_bdevs_operational": 3,
00:24:53.203    "base_bdevs_list": [
00:24:53.203      {
00:24:53.203        "name": null,
00:24:53.203        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:53.203        "is_configured": false,
00:24:53.203        "data_offset": 2048,
00:24:53.203        "data_size": 63488
00:24:53.203      },
00:24:53.203      {
00:24:53.203        "name": "BaseBdev2",
00:24:53.203        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:24:53.203        "is_configured": true,
00:24:53.203        "data_offset": 2048,
00:24:53.203        "data_size": 63488
00:24:53.203      },
00:24:53.203      {
00:24:53.203        "name": "BaseBdev3",
00:24:53.203        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:24:53.203        "is_configured": true,
00:24:53.203        "data_offset": 2048,
00:24:53.203        "data_size": 63488
00:24:53.203      },
00:24:53.203      {
00:24:53.203        "name": "BaseBdev4",
00:24:53.203        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:24:53.203        "is_configured": true,
00:24:53.203        "data_offset": 2048,
00:24:53.203        "data_size": 63488
00:24:53.203      }
00:24:53.203    ]
00:24:53.203  }'
00:24:53.203    23:58:23	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:53.203   23:58:23	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:24:53.203    23:58:23	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:53.203   23:58:23	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:24:53.203   23:58:23	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:24:53.462  [2024-12-13 23:58:24.139338] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:24:53.462  [2024-12-13 23:58:24.139518] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:53.462  [2024-12-13 23:58:24.149292] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0
00:24:53.462  [2024-12-13 23:58:24.156040] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:53.462   23:58:24	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:24:54.837   23:58:25	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:54.837   23:58:25	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:54.837   23:58:25	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:54.837   23:58:25	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:54.837   23:58:25	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:54.837    23:58:25	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:54.837    23:58:25	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:54.837   23:58:25	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:54.837    "name": "raid_bdev1",
00:24:54.837    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:24:54.837    "strip_size_kb": 64,
00:24:54.837    "state": "online",
00:24:54.837    "raid_level": "raid5f",
00:24:54.837    "superblock": true,
00:24:54.837    "num_base_bdevs": 4,
00:24:54.837    "num_base_bdevs_discovered": 4,
00:24:54.837    "num_base_bdevs_operational": 4,
00:24:54.837    "process": {
00:24:54.837      "type": "rebuild",
00:24:54.837      "target": "spare",
00:24:54.837      "progress": {
00:24:54.837        "blocks": 23040,
00:24:54.837        "percent": 12
00:24:54.837      }
00:24:54.837    },
00:24:54.837    "base_bdevs_list": [
00:24:54.837      {
00:24:54.837        "name": "spare",
00:24:54.837        "uuid": "ad453f55-ff21-5c42-b5e8-99c555d9858a",
00:24:54.837        "is_configured": true,
00:24:54.837        "data_offset": 2048,
00:24:54.837        "data_size": 63488
00:24:54.837      },
00:24:54.837      {
00:24:54.837        "name": "BaseBdev2",
00:24:54.837        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:24:54.837        "is_configured": true,
00:24:54.837        "data_offset": 2048,
00:24:54.837        "data_size": 63488
00:24:54.837      },
00:24:54.837      {
00:24:54.837        "name": "BaseBdev3",
00:24:54.837        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:24:54.837        "is_configured": true,
00:24:54.837        "data_offset": 2048,
00:24:54.837        "data_size": 63488
00:24:54.837      },
00:24:54.837      {
00:24:54.837        "name": "BaseBdev4",
00:24:54.837        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:24:54.837        "is_configured": true,
00:24:54.837        "data_offset": 2048,
00:24:54.837        "data_size": 63488
00:24:54.837      }
00:24:54.837    ]
00:24:54.837  }'
00:24:54.837    23:58:25	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:54.837   23:58:25	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:54.837    23:58:25	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:54.837   23:58:25	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:54.837   23:58:25	-- bdev/bdev_raid.sh@617 -- # '[' true = true ']'
00:24:54.837   23:58:25	-- bdev/bdev_raid.sh@617 -- # '[' = false ']'
00:24:54.837  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected
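The error above is a genuine script bug captured by the log: the variable tested at bdev_raid.sh line 617 expanded to nothing, so `[` saw only `= false` and complained about the missing left operand (the trace line just before shows the empty slot: `'[' = false ']'`). The usual fix is to quote the expansion, or use `[[ ]]`, which tolerates empty words; a minimal sketch with a hypothetical variable name:

    flag=""
    if [ "$flag" = false ]; then echo unreachable; fi   # quoted: test is simply false, no error
    if [[ $flag == false ]]; then echo unreachable; fi  # [[ ]] is safe even unquoted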
00:24:54.838   23:58:25	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4
00:24:54.838   23:58:25	-- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']'
00:24:54.838   23:58:25	-- bdev/bdev_raid.sh@657 -- # local timeout=716
00:24:54.838   23:58:25	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:54.838   23:58:25	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:54.838   23:58:25	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:54.838   23:58:25	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:54.838   23:58:25	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:54.838   23:58:25	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:54.838    23:58:25	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:54.838    23:58:25	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:55.096   23:58:25	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:55.096    "name": "raid_bdev1",
00:24:55.096    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:24:55.096    "strip_size_kb": 64,
00:24:55.096    "state": "online",
00:24:55.096    "raid_level": "raid5f",
00:24:55.096    "superblock": true,
00:24:55.096    "num_base_bdevs": 4,
00:24:55.096    "num_base_bdevs_discovered": 4,
00:24:55.096    "num_base_bdevs_operational": 4,
00:24:55.096    "process": {
00:24:55.096      "type": "rebuild",
00:24:55.096      "target": "spare",
00:24:55.096      "progress": {
00:24:55.096        "blocks": 28800,
00:24:55.096        "percent": 15
00:24:55.096      }
00:24:55.096    },
00:24:55.096    "base_bdevs_list": [
00:24:55.096      {
00:24:55.096        "name": "spare",
00:24:55.096        "uuid": "ad453f55-ff21-5c42-b5e8-99c555d9858a",
00:24:55.096        "is_configured": true,
00:24:55.096        "data_offset": 2048,
00:24:55.096        "data_size": 63488
00:24:55.096      },
00:24:55.096      {
00:24:55.096        "name": "BaseBdev2",
00:24:55.096        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:24:55.096        "is_configured": true,
00:24:55.096        "data_offset": 2048,
00:24:55.096        "data_size": 63488
00:24:55.096      },
00:24:55.096      {
00:24:55.096        "name": "BaseBdev3",
00:24:55.096        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:24:55.096        "is_configured": true,
00:24:55.096        "data_offset": 2048,
00:24:55.096        "data_size": 63488
00:24:55.096      },
00:24:55.096      {
00:24:55.096        "name": "BaseBdev4",
00:24:55.096        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:24:55.096        "is_configured": true,
00:24:55.096        "data_offset": 2048,
00:24:55.096        "data_size": 63488
00:24:55.096      }
00:24:55.096    ]
00:24:55.096  }'
00:24:55.096    23:58:25	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:55.096   23:58:25	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:55.096    23:58:25	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:55.354   23:58:25	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:55.354   23:58:25	-- bdev/bdev_raid.sh@662 -- # sleep 1
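The iterations that follow repeat this @658-@662 loop: while bash's `SECONDS` is under the 716 s budget, re-query the raid bdev, assert the rebuild process is still running with `spare` as its target, then sleep. A condensed sketch of the polling pattern (not the script's exact control flow), with socket path and bdev name as in this log:

    sock=/var/tmp/spdk-raid.sock
    timeout=716
    while (( SECONDS < timeout )); do
        type=$(scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all \
               | jq -r '.[] | select(.name == "raid_bdev1").process.type // "none"')
        [[ $type == rebuild ]] || break   # rebuild finished (or bdev is gone)
        sleep 1
    done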
00:24:56.290   23:58:26	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:56.290   23:58:26	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:56.290   23:58:26	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:56.290   23:58:26	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:56.290   23:58:26	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:56.290   23:58:26	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:56.290    23:58:26	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:56.290    23:58:26	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:56.548   23:58:27	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:56.548    "name": "raid_bdev1",
00:24:56.548    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:24:56.548    "strip_size_kb": 64,
00:24:56.548    "state": "online",
00:24:56.548    "raid_level": "raid5f",
00:24:56.548    "superblock": true,
00:24:56.548    "num_base_bdevs": 4,
00:24:56.548    "num_base_bdevs_discovered": 4,
00:24:56.548    "num_base_bdevs_operational": 4,
00:24:56.548    "process": {
00:24:56.548      "type": "rebuild",
00:24:56.548      "target": "spare",
00:24:56.548      "progress": {
00:24:56.548        "blocks": 53760,
00:24:56.548        "percent": 28
00:24:56.548      }
00:24:56.548    },
00:24:56.548    "base_bdevs_list": [
00:24:56.548      {
00:24:56.548        "name": "spare",
00:24:56.548        "uuid": "ad453f55-ff21-5c42-b5e8-99c555d9858a",
00:24:56.548        "is_configured": true,
00:24:56.548        "data_offset": 2048,
00:24:56.548        "data_size": 63488
00:24:56.548      },
00:24:56.548      {
00:24:56.548        "name": "BaseBdev2",
00:24:56.548        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:24:56.548        "is_configured": true,
00:24:56.548        "data_offset": 2048,
00:24:56.548        "data_size": 63488
00:24:56.548      },
00:24:56.548      {
00:24:56.548        "name": "BaseBdev3",
00:24:56.548        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:24:56.548        "is_configured": true,
00:24:56.548        "data_offset": 2048,
00:24:56.548        "data_size": 63488
00:24:56.548      },
00:24:56.548      {
00:24:56.548        "name": "BaseBdev4",
00:24:56.548        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:24:56.548        "is_configured": true,
00:24:56.548        "data_offset": 2048,
00:24:56.548        "data_size": 63488
00:24:56.548      }
00:24:56.548    ]
00:24:56.548  }'
00:24:56.548    23:58:27	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:56.548   23:58:27	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:56.548    23:58:27	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:56.548   23:58:27	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:56.548   23:58:27	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:57.484   23:58:28	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:57.484   23:58:28	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:57.484   23:58:28	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:57.484   23:58:28	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:57.484   23:58:28	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:57.484   23:58:28	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:57.484    23:58:28	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:57.484    23:58:28	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:57.743   23:58:28	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:57.743    "name": "raid_bdev1",
00:24:57.743    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:24:57.743    "strip_size_kb": 64,
00:24:57.743    "state": "online",
00:24:57.743    "raid_level": "raid5f",
00:24:57.743    "superblock": true,
00:24:57.743    "num_base_bdevs": 4,
00:24:57.743    "num_base_bdevs_discovered": 4,
00:24:57.743    "num_base_bdevs_operational": 4,
00:24:57.743    "process": {
00:24:57.743      "type": "rebuild",
00:24:57.743      "target": "spare",
00:24:57.743      "progress": {
00:24:57.743        "blocks": 78720,
00:24:57.743        "percent": 41
00:24:57.743      }
00:24:57.743    },
00:24:57.743    "base_bdevs_list": [
00:24:57.743      {
00:24:57.743        "name": "spare",
00:24:57.743        "uuid": "ad453f55-ff21-5c42-b5e8-99c555d9858a",
00:24:57.743        "is_configured": true,
00:24:57.743        "data_offset": 2048,
00:24:57.743        "data_size": 63488
00:24:57.743      },
00:24:57.743      {
00:24:57.743        "name": "BaseBdev2",
00:24:57.743        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:24:57.743        "is_configured": true,
00:24:57.743        "data_offset": 2048,
00:24:57.743        "data_size": 63488
00:24:57.743      },
00:24:57.743      {
00:24:57.743        "name": "BaseBdev3",
00:24:57.743        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:24:57.743        "is_configured": true,
00:24:57.743        "data_offset": 2048,
00:24:57.743        "data_size": 63488
00:24:57.743      },
00:24:57.743      {
00:24:57.743        "name": "BaseBdev4",
00:24:57.743        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:24:57.743        "is_configured": true,
00:24:57.743        "data_offset": 2048,
00:24:57.743        "data_size": 63488
00:24:57.743      }
00:24:57.743    ]
00:24:57.743  }'
00:24:57.743    23:58:28	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:57.743   23:58:28	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:57.743    23:58:28	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:58.001   23:58:28	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:58.001   23:58:28	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:58.937   23:58:29	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:58.937   23:58:29	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:58.937   23:58:29	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:58.937   23:58:29	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:58.937   23:58:29	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:58.937   23:58:29	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:58.937    23:58:29	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:58.937    23:58:29	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:59.195   23:58:29	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:59.195    "name": "raid_bdev1",
00:24:59.195    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:24:59.195    "strip_size_kb": 64,
00:24:59.195    "state": "online",
00:24:59.195    "raid_level": "raid5f",
00:24:59.195    "superblock": true,
00:24:59.195    "num_base_bdevs": 4,
00:24:59.195    "num_base_bdevs_discovered": 4,
00:24:59.195    "num_base_bdevs_operational": 4,
00:24:59.195    "process": {
00:24:59.195      "type": "rebuild",
00:24:59.195      "target": "spare",
00:24:59.195      "progress": {
00:24:59.195        "blocks": 105600,
00:24:59.195        "percent": 55
00:24:59.195      }
00:24:59.195    },
00:24:59.195    "base_bdevs_list": [
00:24:59.195      {
00:24:59.195        "name": "spare",
00:24:59.195        "uuid": "ad453f55-ff21-5c42-b5e8-99c555d9858a",
00:24:59.195        "is_configured": true,
00:24:59.195        "data_offset": 2048,
00:24:59.196        "data_size": 63488
00:24:59.196      },
00:24:59.196      {
00:24:59.196        "name": "BaseBdev2",
00:24:59.196        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:24:59.196        "is_configured": true,
00:24:59.196        "data_offset": 2048,
00:24:59.196        "data_size": 63488
00:24:59.196      },
00:24:59.196      {
00:24:59.196        "name": "BaseBdev3",
00:24:59.196        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:24:59.196        "is_configured": true,
00:24:59.196        "data_offset": 2048,
00:24:59.196        "data_size": 63488
00:24:59.196      },
00:24:59.196      {
00:24:59.196        "name": "BaseBdev4",
00:24:59.196        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:24:59.196        "is_configured": true,
00:24:59.196        "data_offset": 2048,
00:24:59.196        "data_size": 63488
00:24:59.196      }
00:24:59.196    ]
00:24:59.196  }'
00:24:59.196    23:58:29	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:59.196   23:58:29	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:59.196    23:58:29	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:59.196   23:58:29	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:59.196   23:58:29	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:25:00.149   23:58:30	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:25:00.149   23:58:30	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:25:00.149   23:58:30	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:25:00.149   23:58:30	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:25:00.149   23:58:30	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:25:00.149   23:58:30	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:25:00.149    23:58:30	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:00.149    23:58:30	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:00.417   23:58:31	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:25:00.417    "name": "raid_bdev1",
00:25:00.417    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:25:00.417    "strip_size_kb": 64,
00:25:00.417    "state": "online",
00:25:00.417    "raid_level": "raid5f",
00:25:00.417    "superblock": true,
00:25:00.417    "num_base_bdevs": 4,
00:25:00.417    "num_base_bdevs_discovered": 4,
00:25:00.417    "num_base_bdevs_operational": 4,
00:25:00.417    "process": {
00:25:00.417      "type": "rebuild",
00:25:00.417      "target": "spare",
00:25:00.417      "progress": {
00:25:00.417        "blocks": 130560,
00:25:00.417        "percent": 68
00:25:00.417      }
00:25:00.417    },
00:25:00.417    "base_bdevs_list": [
00:25:00.417      {
00:25:00.417        "name": "spare",
00:25:00.417        "uuid": "ad453f55-ff21-5c42-b5e8-99c555d9858a",
00:25:00.417        "is_configured": true,
00:25:00.417        "data_offset": 2048,
00:25:00.417        "data_size": 63488
00:25:00.417      },
00:25:00.417      {
00:25:00.417        "name": "BaseBdev2",
00:25:00.417        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:25:00.417        "is_configured": true,
00:25:00.417        "data_offset": 2048,
00:25:00.417        "data_size": 63488
00:25:00.417      },
00:25:00.417      {
00:25:00.417        "name": "BaseBdev3",
00:25:00.417        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:25:00.417        "is_configured": true,
00:25:00.417        "data_offset": 2048,
00:25:00.417        "data_size": 63488
00:25:00.417      },
00:25:00.417      {
00:25:00.417        "name": "BaseBdev4",
00:25:00.417        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:25:00.417        "is_configured": true,
00:25:00.417        "data_offset": 2048,
00:25:00.417        "data_size": 63488
00:25:00.417      }
00:25:00.417    ]
00:25:00.417  }'
00:25:00.417    23:58:31	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:25:00.417   23:58:31	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:25:00.417    23:58:31	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:25:00.676   23:58:31	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:25:00.676   23:58:31	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:25:01.612   23:58:32	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:25:01.612   23:58:32	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:25:01.612   23:58:32	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:25:01.612   23:58:32	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:25:01.612   23:58:32	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:25:01.612   23:58:32	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:25:01.612    23:58:32	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:01.612    23:58:32	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:01.870   23:58:32	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:25:01.870    "name": "raid_bdev1",
00:25:01.870    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:25:01.870    "strip_size_kb": 64,
00:25:01.870    "state": "online",
00:25:01.870    "raid_level": "raid5f",
00:25:01.870    "superblock": true,
00:25:01.870    "num_base_bdevs": 4,
00:25:01.870    "num_base_bdevs_discovered": 4,
00:25:01.870    "num_base_bdevs_operational": 4,
00:25:01.870    "process": {
00:25:01.870      "type": "rebuild",
00:25:01.870      "target": "spare",
00:25:01.870      "progress": {
00:25:01.870        "blocks": 155520,
00:25:01.870        "percent": 81
00:25:01.870      }
00:25:01.870    },
00:25:01.870    "base_bdevs_list": [
00:25:01.870      {
00:25:01.870        "name": "spare",
00:25:01.870        "uuid": "ad453f55-ff21-5c42-b5e8-99c555d9858a",
00:25:01.870        "is_configured": true,
00:25:01.870        "data_offset": 2048,
00:25:01.870        "data_size": 63488
00:25:01.870      },
00:25:01.870      {
00:25:01.870        "name": "BaseBdev2",
00:25:01.870        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:25:01.870        "is_configured": true,
00:25:01.870        "data_offset": 2048,
00:25:01.870        "data_size": 63488
00:25:01.870      },
00:25:01.870      {
00:25:01.870        "name": "BaseBdev3",
00:25:01.870        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:25:01.870        "is_configured": true,
00:25:01.870        "data_offset": 2048,
00:25:01.870        "data_size": 63488
00:25:01.870      },
00:25:01.870      {
00:25:01.870        "name": "BaseBdev4",
00:25:01.870        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:25:01.870        "is_configured": true,
00:25:01.870        "data_offset": 2048,
00:25:01.870        "data_size": 63488
00:25:01.870      }
00:25:01.870    ]
00:25:01.870  }'
00:25:01.870    23:58:32	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:25:01.870   23:58:32	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:25:01.870    23:58:32	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:25:01.870   23:58:32	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:25:01.870   23:58:32	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:25:02.806   23:58:33	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:25:02.806   23:58:33	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:25:02.806   23:58:33	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:25:02.806   23:58:33	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:25:02.806   23:58:33	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:25:02.806   23:58:33	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:25:02.806    23:58:33	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:02.806    23:58:33	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:03.065   23:58:33	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:25:03.065    "name": "raid_bdev1",
00:25:03.065    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:25:03.065    "strip_size_kb": 64,
00:25:03.065    "state": "online",
00:25:03.065    "raid_level": "raid5f",
00:25:03.065    "superblock": true,
00:25:03.065    "num_base_bdevs": 4,
00:25:03.065    "num_base_bdevs_discovered": 4,
00:25:03.065    "num_base_bdevs_operational": 4,
00:25:03.065    "process": {
00:25:03.065      "type": "rebuild",
00:25:03.065      "target": "spare",
00:25:03.065      "progress": {
00:25:03.065        "blocks": 182400,
00:25:03.065        "percent": 95
00:25:03.065      }
00:25:03.065    },
00:25:03.065    "base_bdevs_list": [
00:25:03.065      {
00:25:03.065        "name": "spare",
00:25:03.065        "uuid": "ad453f55-ff21-5c42-b5e8-99c555d9858a",
00:25:03.065        "is_configured": true,
00:25:03.065        "data_offset": 2048,
00:25:03.065        "data_size": 63488
00:25:03.065      },
00:25:03.065      {
00:25:03.065        "name": "BaseBdev2",
00:25:03.065        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:25:03.065        "is_configured": true,
00:25:03.065        "data_offset": 2048,
00:25:03.065        "data_size": 63488
00:25:03.065      },
00:25:03.065      {
00:25:03.065        "name": "BaseBdev3",
00:25:03.065        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:25:03.065        "is_configured": true,
00:25:03.065        "data_offset": 2048,
00:25:03.065        "data_size": 63488
00:25:03.065      },
00:25:03.065      {
00:25:03.065        "name": "BaseBdev4",
00:25:03.065        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:25:03.065        "is_configured": true,
00:25:03.065        "data_offset": 2048,
00:25:03.066        "data_size": 63488
00:25:03.066      }
00:25:03.066    ]
00:25:03.066  }'
00:25:03.066    23:58:33	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:25:03.324   23:58:33	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:25:03.324    23:58:33	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:25:03.324   23:58:33	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:25:03.324   23:58:33	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:25:03.583  [2024-12-13 23:58:34.228348] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:25:03.583  [2024-12-13 23:58:34.228559] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:25:03.583  [2024-12-13 23:58:34.228842] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:04.149   23:58:34	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:25:04.149   23:58:34	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:25:04.149   23:58:34	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:25:04.149   23:58:34	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:25:04.149   23:58:34	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:25:04.149   23:58:34	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:25:04.149    23:58:34	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:04.149    23:58:34	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:04.408   23:58:35	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:25:04.408    "name": "raid_bdev1",
00:25:04.408    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:25:04.408    "strip_size_kb": 64,
00:25:04.408    "state": "online",
00:25:04.408    "raid_level": "raid5f",
00:25:04.408    "superblock": true,
00:25:04.408    "num_base_bdevs": 4,
00:25:04.408    "num_base_bdevs_discovered": 4,
00:25:04.408    "num_base_bdevs_operational": 4,
00:25:04.408    "base_bdevs_list": [
00:25:04.408      {
00:25:04.408        "name": "spare",
00:25:04.408        "uuid": "ad453f55-ff21-5c42-b5e8-99c555d9858a",
00:25:04.408        "is_configured": true,
00:25:04.408        "data_offset": 2048,
00:25:04.408        "data_size": 63488
00:25:04.408      },
00:25:04.408      {
00:25:04.408        "name": "BaseBdev2",
00:25:04.408        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:25:04.408        "is_configured": true,
00:25:04.408        "data_offset": 2048,
00:25:04.408        "data_size": 63488
00:25:04.408      },
00:25:04.408      {
00:25:04.408        "name": "BaseBdev3",
00:25:04.408        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:25:04.408        "is_configured": true,
00:25:04.408        "data_offset": 2048,
00:25:04.408        "data_size": 63488
00:25:04.408      },
00:25:04.408      {
00:25:04.408        "name": "BaseBdev4",
00:25:04.408        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:25:04.408        "is_configured": true,
00:25:04.408        "data_offset": 2048,
00:25:04.408        "data_size": 63488
00:25:04.408      }
00:25:04.408    ]
00:25:04.408  }'
00:25:04.408    23:58:35	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:25:04.666   23:58:35	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:25:04.666    23:58:35	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:25:04.666   23:58:35	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:25:04.666   23:58:35	-- bdev/bdev_raid.sh@660 -- # break
00:25:04.666   23:58:35	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:25:04.666   23:58:35	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:25:04.666   23:58:35	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:25:04.666   23:58:35	-- bdev/bdev_raid.sh@185 -- # local target=none
00:25:04.666   23:58:35	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:25:04.666    23:58:35	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:04.666    23:58:35	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:04.924   23:58:35	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:25:04.924    "name": "raid_bdev1",
00:25:04.924    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:25:04.924    "strip_size_kb": 64,
00:25:04.924    "state": "online",
00:25:04.924    "raid_level": "raid5f",
00:25:04.924    "superblock": true,
00:25:04.924    "num_base_bdevs": 4,
00:25:04.924    "num_base_bdevs_discovered": 4,
00:25:04.924    "num_base_bdevs_operational": 4,
00:25:04.924    "base_bdevs_list": [
00:25:04.924      {
00:25:04.924        "name": "spare",
00:25:04.924        "uuid": "ad453f55-ff21-5c42-b5e8-99c555d9858a",
00:25:04.924        "is_configured": true,
00:25:04.924        "data_offset": 2048,
00:25:04.924        "data_size": 63488
00:25:04.924      },
00:25:04.924      {
00:25:04.924        "name": "BaseBdev2",
00:25:04.924        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:25:04.924        "is_configured": true,
00:25:04.924        "data_offset": 2048,
00:25:04.924        "data_size": 63488
00:25:04.924      },
00:25:04.924      {
00:25:04.924        "name": "BaseBdev3",
00:25:04.924        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:25:04.924        "is_configured": true,
00:25:04.924        "data_offset": 2048,
00:25:04.924        "data_size": 63488
00:25:04.924      },
00:25:04.924      {
00:25:04.924        "name": "BaseBdev4",
00:25:04.924        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:25:04.924        "is_configured": true,
00:25:04.924        "data_offset": 2048,
00:25:04.924        "data_size": 63488
00:25:04.924      }
00:25:04.924    ]
00:25:04.924  }'
00:25:04.924    23:58:35	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:25:04.924   23:58:35	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:25:04.924    23:58:35	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:25:04.924   23:58:35	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:25:04.924   23:58:35	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:25:04.924   23:58:35	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:25:04.924   23:58:35	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:25:04.924   23:58:35	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:25:04.924   23:58:35	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:25:04.924   23:58:35	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:25:04.924   23:58:35	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:25:04.924   23:58:35	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:25:04.924   23:58:35	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:25:04.924   23:58:35	-- bdev/bdev_raid.sh@125 -- # local tmp
00:25:04.924    23:58:35	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:04.924    23:58:35	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:05.183   23:58:35	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:25:05.183    "name": "raid_bdev1",
00:25:05.183    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:25:05.183    "strip_size_kb": 64,
00:25:05.183    "state": "online",
00:25:05.183    "raid_level": "raid5f",
00:25:05.183    "superblock": true,
00:25:05.183    "num_base_bdevs": 4,
00:25:05.183    "num_base_bdevs_discovered": 4,
00:25:05.183    "num_base_bdevs_operational": 4,
00:25:05.183    "base_bdevs_list": [
00:25:05.183      {
00:25:05.183        "name": "spare",
00:25:05.183        "uuid": "ad453f55-ff21-5c42-b5e8-99c555d9858a",
00:25:05.183        "is_configured": true,
00:25:05.183        "data_offset": 2048,
00:25:05.183        "data_size": 63488
00:25:05.183      },
00:25:05.183      {
00:25:05.183        "name": "BaseBdev2",
00:25:05.183        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:25:05.183        "is_configured": true,
00:25:05.183        "data_offset": 2048,
00:25:05.183        "data_size": 63488
00:25:05.183      },
00:25:05.183      {
00:25:05.183        "name": "BaseBdev3",
00:25:05.183        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:25:05.183        "is_configured": true,
00:25:05.183        "data_offset": 2048,
00:25:05.183        "data_size": 63488
00:25:05.183      },
00:25:05.183      {
00:25:05.183        "name": "BaseBdev4",
00:25:05.183        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:25:05.183        "is_configured": true,
00:25:05.183        "data_offset": 2048,
00:25:05.183        "data_size": 63488
00:25:05.183      }
00:25:05.183    ]
00:25:05.183  }'
00:25:05.183   23:58:35	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:25:05.183   23:58:35	-- common/autotest_common.sh@10 -- # set +x
00:25:05.750   23:58:36	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:25:06.008  [2024-12-13 23:58:36.624599] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:25:06.008  [2024-12-13 23:58:36.624762] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:25:06.009  [2024-12-13 23:58:36.624940] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:25:06.009  [2024-12-13 23:58:36.625171] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:25:06.009  [2024-12-13 23:58:36.625286] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline
00:25:06.009    23:58:36	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:06.009    23:58:36	-- bdev/bdev_raid.sh@671 -- # jq length
00:25:06.267   23:58:36	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:25:06.267   23:58:36	-- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:25:06.267   23:58:36	-- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:25:06.267   23:58:36	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:25:06.267   23:58:36	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:25:06.267   23:58:36	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:25:06.267   23:58:36	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:25:06.267   23:58:36	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:25:06.267   23:58:36	-- bdev/nbd_common.sh@12 -- # local i
00:25:06.267   23:58:36	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:25:06.267   23:58:36	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:25:06.267   23:58:36	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:25:06.525  /dev/nbd0
00:25:06.525    23:58:37	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:25:06.525   23:58:37	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:25:06.525   23:58:37	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:25:06.525   23:58:37	-- common/autotest_common.sh@867 -- # local i
00:25:06.525   23:58:37	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:25:06.525   23:58:37	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:25:06.525   23:58:37	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:25:06.525   23:58:37	-- common/autotest_common.sh@871 -- # break
00:25:06.525   23:58:37	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:25:06.525   23:58:37	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:25:06.525   23:58:37	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:25:06.525  1+0 records in
00:25:06.525  1+0 records out
00:25:06.525  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384483 s, 10.7 MB/s
00:25:06.525    23:58:37	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:25:06.525   23:58:37	-- common/autotest_common.sh@884 -- # size=4096
00:25:06.525   23:58:37	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:25:06.525   23:58:37	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:25:06.525   23:58:37	-- common/autotest_common.sh@887 -- # return 0
00:25:06.525   23:58:37	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:25:06.525   23:58:37	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:25:06.525   23:58:37	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:25:06.783  /dev/nbd1
00:25:06.783    23:58:37	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:25:06.783   23:58:37	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:25:06.783   23:58:37	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:25:06.783   23:58:37	-- common/autotest_common.sh@867 -- # local i
00:25:06.783   23:58:37	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:25:06.783   23:58:37	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:25:06.783   23:58:37	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:25:06.783   23:58:37	-- common/autotest_common.sh@871 -- # break
00:25:06.783   23:58:37	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:25:06.783   23:58:37	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:25:06.783   23:58:37	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:25:06.783  1+0 records in
00:25:06.783  1+0 records out
00:25:06.783  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506922 s, 8.1 MB/s
00:25:06.783    23:58:37	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:25:06.783   23:58:37	-- common/autotest_common.sh@884 -- # size=4096
00:25:06.783   23:58:37	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:25:06.783   23:58:37	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:25:06.784   23:58:37	-- common/autotest_common.sh@887 -- # return 0
00:25:06.784   23:58:37	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:25:06.784   23:58:37	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:25:06.784   23:58:37	-- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:25:07.042   23:58:37	-- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:25:07.042   23:58:37	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:25:07.042   23:58:37	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:25:07.042   23:58:37	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:25:07.042   23:58:37	-- bdev/nbd_common.sh@51 -- # local i
00:25:07.042   23:58:37	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:25:07.042   23:58:37	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:25:07.300    23:58:37	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:25:07.300   23:58:37	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:25:07.300   23:58:37	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:25:07.300   23:58:37	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:25:07.300   23:58:37	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:25:07.300   23:58:37	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:25:07.300   23:58:37	-- bdev/nbd_common.sh@41 -- # break
00:25:07.300   23:58:37	-- bdev/nbd_common.sh@45 -- # return 0
00:25:07.300   23:58:37	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:25:07.300   23:58:37	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:25:07.558    23:58:38	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:25:07.558   23:58:38	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:25:07.558   23:58:38	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:25:07.558   23:58:38	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:25:07.558   23:58:38	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:25:07.558   23:58:38	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:25:07.558   23:58:38	-- bdev/nbd_common.sh@41 -- # break
00:25:07.558   23:58:38	-- bdev/nbd_common.sh@45 -- # return 0
00:25:07.558   23:58:38	-- bdev/bdev_raid.sh@692 -- # '[' true = true ']'
00:25:07.558   23:58:38	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:25:07.558   23:58:38	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']'
00:25:07.558   23:58:38	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:25:07.558   23:58:38	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:25:07.817  [2024-12-13 23:58:38.528229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:25:07.817  [2024-12-13 23:58:38.528447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:07.817  [2024-12-13 23:58:38.528540] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:25:07.817  [2024-12-13 23:58:38.528670] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:07.817  [2024-12-13 23:58:38.531024] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:07.817  [2024-12-13 23:58:38.531217] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:25:07.817  [2024-12-13 23:58:38.531427] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:25:07.817  [2024-12-13 23:58:38.531630] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:07.817  BaseBdev1
00:25:07.817   23:58:38	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:25:07.817   23:58:38	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']'
00:25:07.817   23:58:38	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2
00:25:08.076   23:58:38	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:25:08.335  [2024-12-13 23:58:39.023470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:25:08.335  [2024-12-13 23:58:39.023660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:08.335  [2024-12-13 23:58:39.023737] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:25:08.335  [2024-12-13 23:58:39.023853] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:08.335  [2024-12-13 23:58:39.024377] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:08.335  [2024-12-13 23:58:39.024558] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:25:08.335  [2024-12-13 23:58:39.024773] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2
00:25:08.335  [2024-12-13 23:58:39.024881] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1)
00:25:08.335  [2024-12-13 23:58:39.024980] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:25:08.335  [2024-12-13 23:58:39.025032] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring
00:25:08.335  [2024-12-13 23:58:39.025181] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:25:08.335  BaseBdev2
00:25:08.335   23:58:39	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:25:08.335   23:58:39	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']'
00:25:08.335   23:58:39	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3
00:25:08.594   23:58:39	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:25:08.852  [2024-12-13 23:58:39.399532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:25:08.852  [2024-12-13 23:58:39.399719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:08.852  [2024-12-13 23:58:39.399797] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:25:08.853  [2024-12-13 23:58:39.399924] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:08.853  [2024-12-13 23:58:39.400327] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:08.853  [2024-12-13 23:58:39.400475] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:25:08.853  [2024-12-13 23:58:39.400662] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3
00:25:08.853  [2024-12-13 23:58:39.400769] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:25:08.853  BaseBdev3
00:25:08.853   23:58:39	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:25:08.853   23:58:39	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']'
00:25:08.853   23:58:39	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4
00:25:09.111   23:58:39	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:25:09.111  [2024-12-13 23:58:39.821436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:25:09.111  [2024-12-13 23:58:39.821635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:09.111  [2024-12-13 23:58:39.821708] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:25:09.111  [2024-12-13 23:58:39.821858] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:09.111  [2024-12-13 23:58:39.822273] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:09.111  [2024-12-13 23:58:39.822450] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:25:09.111  [2024-12-13 23:58:39.822660] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4
00:25:09.111  [2024-12-13 23:58:39.822807] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:25:09.111  BaseBdev4
00:25:09.111   23:58:39	-- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:25:09.370   23:58:40	-- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:25:09.629  [2024-12-13 23:58:40.193494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:25:09.629  [2024-12-13 23:58:40.193687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:09.629  [2024-12-13 23:58:40.193751] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980
00:25:09.629  [2024-12-13 23:58:40.193879] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:09.629  [2024-12-13 23:58:40.194330] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:09.629  [2024-12-13 23:58:40.194513] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:25:09.629  [2024-12-13 23:58:40.194712] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare
00:25:09.629  [2024-12-13 23:58:40.194833] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:25:09.629  spare
00:25:09.629   23:58:40	-- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:25:09.629   23:58:40	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:25:09.629   23:58:40	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:25:09.629   23:58:40	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:25:09.629   23:58:40	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:25:09.629   23:58:40	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:25:09.629   23:58:40	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:25:09.629   23:58:40	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:25:09.629   23:58:40	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:25:09.629   23:58:40	-- bdev/bdev_raid.sh@125 -- # local tmp
00:25:09.629    23:58:40	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:09.629    23:58:40	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:09.629  [2024-12-13 23:58:40.294988] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080
00:25:09.629  [2024-12-13 23:58:40.295106] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:25:09.629  [2024-12-13 23:58:40.295247] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049510
00:25:09.629  [2024-12-13 23:58:40.300399] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080
00:25:09.629  [2024-12-13 23:58:40.300513] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080
00:25:09.629  [2024-12-13 23:58:40.300767] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:09.888   23:58:40	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:25:09.888    "name": "raid_bdev1",
00:25:09.888    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:25:09.888    "strip_size_kb": 64,
00:25:09.888    "state": "online",
00:25:09.888    "raid_level": "raid5f",
00:25:09.888    "superblock": true,
00:25:09.888    "num_base_bdevs": 4,
00:25:09.888    "num_base_bdevs_discovered": 4,
00:25:09.888    "num_base_bdevs_operational": 4,
00:25:09.888    "base_bdevs_list": [
00:25:09.888      {
00:25:09.888        "name": "spare",
00:25:09.888        "uuid": "ad453f55-ff21-5c42-b5e8-99c555d9858a",
00:25:09.888        "is_configured": true,
00:25:09.888        "data_offset": 2048,
00:25:09.888        "data_size": 63488
00:25:09.888      },
00:25:09.888      {
00:25:09.888        "name": "BaseBdev2",
00:25:09.888        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:25:09.888        "is_configured": true,
00:25:09.888        "data_offset": 2048,
00:25:09.888        "data_size": 63488
00:25:09.888      },
00:25:09.888      {
00:25:09.888        "name": "BaseBdev3",
00:25:09.888        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:25:09.888        "is_configured": true,
00:25:09.888        "data_offset": 2048,
00:25:09.888        "data_size": 63488
00:25:09.888      },
00:25:09.888      {
00:25:09.888        "name": "BaseBdev4",
00:25:09.888        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:25:09.888        "is_configured": true,
00:25:09.888        "data_offset": 2048,
00:25:09.888        "data_size": 63488
00:25:09.888      }
00:25:09.888    ]
00:25:09.888  }'
00:25:09.888   23:58:40	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:25:09.888   23:58:40	-- common/autotest_common.sh@10 -- # set +x
00:25:10.455   23:58:41	-- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none
00:25:10.455   23:58:41	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:25:10.455   23:58:41	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:25:10.455   23:58:41	-- bdev/bdev_raid.sh@185 -- # local target=none
00:25:10.455   23:58:41	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:25:10.455    23:58:41	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:10.455    23:58:41	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:10.713   23:58:41	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:25:10.713    "name": "raid_bdev1",
00:25:10.713    "uuid": "6c6823b7-cdf0-4403-bf11-d1b109c3ab67",
00:25:10.713    "strip_size_kb": 64,
00:25:10.713    "state": "online",
00:25:10.713    "raid_level": "raid5f",
00:25:10.713    "superblock": true,
00:25:10.713    "num_base_bdevs": 4,
00:25:10.713    "num_base_bdevs_discovered": 4,
00:25:10.713    "num_base_bdevs_operational": 4,
00:25:10.713    "base_bdevs_list": [
00:25:10.713      {
00:25:10.713        "name": "spare",
00:25:10.713        "uuid": "ad453f55-ff21-5c42-b5e8-99c555d9858a",
00:25:10.713        "is_configured": true,
00:25:10.713        "data_offset": 2048,
00:25:10.713        "data_size": 63488
00:25:10.713      },
00:25:10.713      {
00:25:10.713        "name": "BaseBdev2",
00:25:10.713        "uuid": "faf19adf-cee6-50fb-b4ff-e3a8deaea8ef",
00:25:10.713        "is_configured": true,
00:25:10.713        "data_offset": 2048,
00:25:10.713        "data_size": 63488
00:25:10.713      },
00:25:10.713      {
00:25:10.713        "name": "BaseBdev3",
00:25:10.713        "uuid": "ac2a0824-9644-5669-8e8b-9cafc7db0e7a",
00:25:10.713        "is_configured": true,
00:25:10.713        "data_offset": 2048,
00:25:10.713        "data_size": 63488
00:25:10.713      },
00:25:10.713      {
00:25:10.713        "name": "BaseBdev4",
00:25:10.713        "uuid": "ecc2a2e7-90ce-563d-9580-b222bc98d0e1",
00:25:10.713        "is_configured": true,
00:25:10.713        "data_offset": 2048,
00:25:10.713        "data_size": 63488
00:25:10.713      }
00:25:10.713    ]
00:25:10.713  }'
00:25:10.713    23:58:41	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:25:10.714   23:58:41	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:25:10.714    23:58:41	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:25:10.714   23:58:41	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:25:10.714    23:58:41	-- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:10.714    23:58:41	-- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name'
00:25:10.972   23:58:41	-- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]]
00:25:10.972   23:58:41	-- bdev/bdev_raid.sh@709 -- # killprocess 131468
00:25:10.972   23:58:41	-- common/autotest_common.sh@936 -- # '[' -z 131468 ']'
00:25:10.972   23:58:41	-- common/autotest_common.sh@940 -- # kill -0 131468
00:25:10.972    23:58:41	-- common/autotest_common.sh@941 -- # uname
00:25:10.972   23:58:41	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:10.972    23:58:41	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131468
00:25:10.972  killing process with pid 131468
00:25:10.972  Received shutdown signal, test time was about 60.000000 seconds
00:25:10.972  
00:25:10.972                                                                                                  Latency(us)
00:25:10.972  
[2024-12-13T23:58:41.704Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:10.972  
[2024-12-13T23:58:41.704Z]  ===================================================================================================================
00:25:10.972  
[2024-12-13T23:58:41.704Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:25:10.972   23:58:41	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:25:10.972   23:58:41	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:25:10.972   23:58:41	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 131468'
00:25:10.972   23:58:41	-- common/autotest_common.sh@955 -- # kill 131468
00:25:10.972   23:58:41	-- common/autotest_common.sh@960 -- # wait 131468
00:25:10.972  [2024-12-13 23:58:41.641821] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:25:10.972  [2024-12-13 23:58:41.641883] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:25:10.972  [2024-12-13 23:58:41.641971] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:25:10.972  [2024-12-13 23:58:41.641983] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline
00:25:11.539  [2024-12-13 23:58:41.972546] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:25:12.475   23:58:42	-- bdev/bdev_raid.sh@711 -- # return 0
00:25:12.475  
00:25:12.475  real	0m28.979s
00:25:12.475  user	0m43.885s
00:25:12.475  sys	0m3.194s
00:25:12.475   23:58:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:12.475  ************************************
00:25:12.475  END TEST raid5f_rebuild_test_sb
00:25:12.475  ************************************
00:25:12.475   23:58:42	-- common/autotest_common.sh@10 -- # set +x
00:25:12.475   23:58:43	-- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest
00:25:12.475  ************************************
00:25:12.475  END TEST bdev_raid
00:25:12.475  ************************************
00:25:12.475  
00:25:12.475  real	11m43.848s
00:25:12.475  user	19m21.372s
00:25:12.475  sys	1m29.324s
00:25:12.475   23:58:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:12.475   23:58:43	-- common/autotest_common.sh@10 -- # set +x
00:25:12.475   23:58:43	-- spdk/autotest.sh@184 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh
00:25:12.475   23:58:43	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:12.475   23:58:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:12.475   23:58:43	-- common/autotest_common.sh@10 -- # set +x
00:25:12.475  ************************************
00:25:12.475  START TEST bdevperf_config
00:25:12.475  ************************************
00:25:12.475   23:58:43	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh
00:25:12.475  * Looking for test storage...
00:25:12.475  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf
00:25:12.475    23:58:43	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:25:12.475     23:58:43	-- common/autotest_common.sh@1690 -- # lcov --version
00:25:12.475     23:58:43	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:25:12.735    23:58:43	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:25:12.735    23:58:43	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:25:12.735    23:58:43	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:25:12.735    23:58:43	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:25:12.735    23:58:43	-- scripts/common.sh@335 -- # IFS=.-:
00:25:12.735    23:58:43	-- scripts/common.sh@335 -- # read -ra ver1
00:25:12.735    23:58:43	-- scripts/common.sh@336 -- # IFS=.-:
00:25:12.735    23:58:43	-- scripts/common.sh@336 -- # read -ra ver2
00:25:12.735    23:58:43	-- scripts/common.sh@337 -- # local 'op=<'
00:25:12.735    23:58:43	-- scripts/common.sh@339 -- # ver1_l=2
00:25:12.735    23:58:43	-- scripts/common.sh@340 -- # ver2_l=1
00:25:12.735    23:58:43	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:25:12.735    23:58:43	-- scripts/common.sh@343 -- # case "$op" in
00:25:12.735    23:58:43	-- scripts/common.sh@344 -- # : 1
00:25:12.735    23:58:43	-- scripts/common.sh@363 -- # (( v = 0 ))
00:25:12.735    23:58:43	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:12.735     23:58:43	-- scripts/common.sh@364 -- # decimal 1
00:25:12.735     23:58:43	-- scripts/common.sh@352 -- # local d=1
00:25:12.735     23:58:43	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:12.735     23:58:43	-- scripts/common.sh@354 -- # echo 1
00:25:12.735    23:58:43	-- scripts/common.sh@364 -- # ver1[v]=1
00:25:12.735     23:58:43	-- scripts/common.sh@365 -- # decimal 2
00:25:12.735     23:58:43	-- scripts/common.sh@352 -- # local d=2
00:25:12.735     23:58:43	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:12.735     23:58:43	-- scripts/common.sh@354 -- # echo 2
00:25:12.735    23:58:43	-- scripts/common.sh@365 -- # ver2[v]=2
00:25:12.735    23:58:43	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:25:12.735    23:58:43	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:25:12.735    23:58:43	-- scripts/common.sh@367 -- # return 0
00:25:12.735    23:58:43	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:12.735    23:58:43	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:25:12.735  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:12.735  		--rc genhtml_branch_coverage=1
00:25:12.735  		--rc genhtml_function_coverage=1
00:25:12.735  		--rc genhtml_legend=1
00:25:12.735  		--rc geninfo_all_blocks=1
00:25:12.735  		--rc geninfo_unexecuted_blocks=1
00:25:12.735  		
00:25:12.735  		'
00:25:12.735    23:58:43	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:25:12.735  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:12.735  		--rc genhtml_branch_coverage=1
00:25:12.735  		--rc genhtml_function_coverage=1
00:25:12.735  		--rc genhtml_legend=1
00:25:12.735  		--rc geninfo_all_blocks=1
00:25:12.735  		--rc geninfo_unexecuted_blocks=1
00:25:12.735  		
00:25:12.735  		'
00:25:12.735    23:58:43	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:25:12.735  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:12.735  		--rc genhtml_branch_coverage=1
00:25:12.735  		--rc genhtml_function_coverage=1
00:25:12.735  		--rc genhtml_legend=1
00:25:12.735  		--rc geninfo_all_blocks=1
00:25:12.735  		--rc geninfo_unexecuted_blocks=1
00:25:12.735  		
00:25:12.735  		'
00:25:12.735    23:58:43	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:25:12.735  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:12.735  		--rc genhtml_branch_coverage=1
00:25:12.735  		--rc genhtml_function_coverage=1
00:25:12.735  		--rc genhtml_legend=1
00:25:12.735  		--rc geninfo_all_blocks=1
00:25:12.735  		--rc geninfo_unexecuted_blocks=1
00:25:12.735  		
00:25:12.735  		'
00:25:12.735   23:58:43	-- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh
00:25:12.735    23:58:43	-- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
00:25:12.735   23:58:43	-- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json
00:25:12.735   23:58:43	-- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:25:12.735   23:58:43	-- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:25:12.735   23:58:43	-- bdevperf/test_config.sh@17 -- # create_job global read Malloc0
00:25:12.735   23:58:43	-- bdevperf/common.sh@8 -- # local job_section=global
00:25:12.735   23:58:43	-- bdevperf/common.sh@9 -- # local rw=read
00:25:12.735   23:58:43	-- bdevperf/common.sh@10 -- # local filename=Malloc0
00:25:12.735   23:58:43	-- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]]
00:25:12.735   23:58:43	-- bdevperf/common.sh@13 -- # cat
00:25:12.735   23:58:43	-- bdevperf/common.sh@18 -- # job='[global]'
00:25:12.735  
00:25:12.735   23:58:43	-- bdevperf/common.sh@19 -- # echo
00:25:12.735   23:58:43	-- bdevperf/common.sh@20 -- # cat
00:25:12.735   23:58:43	-- bdevperf/test_config.sh@18 -- # create_job job0
00:25:12.735   23:58:43	-- bdevperf/common.sh@8 -- # local job_section=job0
00:25:12.735   23:58:43	-- bdevperf/common.sh@9 -- # local rw=
00:25:12.735   23:58:43	-- bdevperf/common.sh@10 -- # local filename=
00:25:12.735   23:58:43	-- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:25:12.735   23:58:43	-- bdevperf/common.sh@18 -- # job='[job0]'
00:25:12.735  
00:25:12.735   23:58:43	-- bdevperf/common.sh@19 -- # echo
00:25:12.735   23:58:43	-- bdevperf/common.sh@20 -- # cat
00:25:12.735   23:58:43	-- bdevperf/test_config.sh@19 -- # create_job job1
00:25:12.735   23:58:43	-- bdevperf/common.sh@8 -- # local job_section=job1
00:25:12.735   23:58:43	-- bdevperf/common.sh@9 -- # local rw=
00:25:12.735   23:58:43	-- bdevperf/common.sh@10 -- # local filename=
00:25:12.735   23:58:43	-- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:25:12.735   23:58:43	-- bdevperf/common.sh@18 -- # job='[job1]'
00:25:12.735  
00:25:12.735   23:58:43	-- bdevperf/common.sh@19 -- # echo
00:25:12.735   23:58:43	-- bdevperf/common.sh@20 -- # cat
00:25:12.735   23:58:43	-- bdevperf/test_config.sh@20 -- # create_job job2
00:25:12.735   23:58:43	-- bdevperf/common.sh@8 -- # local job_section=job2
00:25:12.735   23:58:43	-- bdevperf/common.sh@9 -- # local rw=
00:25:12.735   23:58:43	-- bdevperf/common.sh@10 -- # local filename=
00:25:12.736   23:58:43	-- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:25:12.736   23:58:43	-- bdevperf/common.sh@18 -- # job='[job2]'
00:25:12.736  
00:25:12.736   23:58:43	-- bdevperf/common.sh@19 -- # echo
00:25:12.736   23:58:43	-- bdevperf/common.sh@20 -- # cat
00:25:12.736   23:58:43	-- bdevperf/test_config.sh@21 -- # create_job job3
00:25:12.736   23:58:43	-- bdevperf/common.sh@8 -- # local job_section=job3
00:25:12.736   23:58:43	-- bdevperf/common.sh@9 -- # local rw=
00:25:12.736   23:58:43	-- bdevperf/common.sh@10 -- # local filename=
00:25:12.736   23:58:43	-- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]]
00:25:12.736   23:58:43	-- bdevperf/common.sh@18 -- # job='[job3]'
00:25:12.736  
00:25:12.736   23:58:43	-- bdevperf/common.sh@19 -- # echo
00:25:12.736   23:58:43	-- bdevperf/common.sh@20 -- # cat
00:25:12.736    23:58:43	-- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:25:16.923   23:58:47	-- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-12-13 23:58:43.344107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:16.923  [2024-12-13 23:58:43.344266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132240 ]
00:25:16.923  Using job config with 4 jobs
00:25:16.923  [2024-12-13 23:58:43.511320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:16.923  [2024-12-13 23:58:43.725514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:16.923  cpumask for '\''job0'\'' is too big
00:25:16.923  cpumask for '\''job1'\'' is too big
00:25:16.923  cpumask for '\''job2'\'' is too big
00:25:16.923  cpumask for '\''job3'\'' is too big
00:25:16.923  Running I/O for 2 seconds...
00:25:16.923  
00:25:16.923                                                                                                  Latency(us)
00:25:16.923  
[2024-12-13T23:58:47.655Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:16.923  
[2024-12-13T23:58:47.655Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:25:16.923  	 Malloc0             :       2.02   32892.90      32.12       0.00     0.00    7767.82    1534.14   24307.90
00:25:16.923  
[2024-12-13T23:58:47.655Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:25:16.923  	 Malloc0             :       2.02   32902.30      32.13       0.00     0.00    7704.79    1392.64   10664.49
00:25:16.923  
[2024-12-13T23:58:47.655Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:25:16.923  	 Malloc0             :       2.03   32940.51      32.17       0.00     0.00    7683.02    1429.88    9234.62
00:25:16.923  
[2024-12-13T23:58:47.655Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:25:16.923  	 Malloc0             :       2.03   32919.26      32.15       0.00     0.00    7676.32    1422.43    7983.48
00:25:16.923  
[2024-12-13T23:58:47.655Z]  ===================================================================================================================
00:25:16.923  
[2024-12-13T23:58:47.655Z]  Total                       :             131654.97     128.57       0.00     0.00    7707.88    1392.64   24307.90'
00:25:16.923    23:58:47	-- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-12-13 23:58:43.344107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:16.923  [2024-12-13 23:58:43.344266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132240 ]
00:25:16.923  Using job config with 4 jobs
00:25:16.923  [2024-12-13 23:58:43.511320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:16.923  [2024-12-13 23:58:43.725514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:16.923  cpumask for '\''job0'\'' is too big
00:25:16.923  cpumask for '\''job1'\'' is too big
00:25:16.924  cpumask for '\''job2'\'' is too big
00:25:16.924  cpumask for '\''job3'\'' is too big
00:25:16.924  Running I/O for 2 seconds...
00:25:16.924  
00:25:16.924                                                                                                  Latency(us)
00:25:16.924  
[2024-12-13T23:58:47.656Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:16.924  
[2024-12-13T23:58:47.656Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:25:16.924  	 Malloc0             :       2.02   32892.90      32.12       0.00     0.00    7767.82    1534.14   24307.90
00:25:16.924  
[2024-12-13T23:58:47.656Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:25:16.924  	 Malloc0             :       2.02   32902.30      32.13       0.00     0.00    7704.79    1392.64   10664.49
00:25:16.924  
[2024-12-13T23:58:47.656Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:25:16.924  	 Malloc0             :       2.03   32940.51      32.17       0.00     0.00    7683.02    1429.88    9234.62
00:25:16.924  
[2024-12-13T23:58:47.656Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:25:16.924  	 Malloc0             :       2.03   32919.26      32.15       0.00     0.00    7676.32    1422.43    7983.48
00:25:16.924  
[2024-12-13T23:58:47.656Z]  ===================================================================================================================
00:25:16.924  
[2024-12-13T23:58:47.656Z]  Total                       :             131654.97     128.57       0.00     0.00    7707.88    1392.64   24307.90'
00:25:16.924    23:58:47	-- bdevperf/common.sh@32 -- # echo '[2024-12-13 23:58:43.344107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:16.924  [2024-12-13 23:58:43.344266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132240 ]
00:25:16.924  Using job config with 4 jobs
00:25:16.924  [2024-12-13 23:58:43.511320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:16.924  [2024-12-13 23:58:43.725514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:16.924  cpumask for '\''job0'\'' is too big
00:25:16.924  cpumask for '\''job1'\'' is too big
00:25:16.924  cpumask for '\''job2'\'' is too big
00:25:16.924  cpumask for '\''job3'\'' is too big
00:25:16.924  Running I/O for 2 seconds...
00:25:16.924  
00:25:16.924                                                                                                  Latency(us)
00:25:16.924  
[2024-12-13T23:58:47.656Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:16.924  
[2024-12-13T23:58:47.656Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:25:16.924  	 Malloc0             :       2.02   32892.90      32.12       0.00     0.00    7767.82    1534.14   24307.90
00:25:16.924  
[2024-12-13T23:58:47.656Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:25:16.924  	 Malloc0             :       2.02   32902.30      32.13       0.00     0.00    7704.79    1392.64   10664.49
00:25:16.924  
[2024-12-13T23:58:47.656Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:25:16.924  	 Malloc0             :       2.03   32940.51      32.17       0.00     0.00    7683.02    1429.88    9234.62
00:25:16.924  
[2024-12-13T23:58:47.656Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:25:16.924  	 Malloc0             :       2.03   32919.26      32.15       0.00     0.00    7676.32    1422.43    7983.48
00:25:16.924  
[2024-12-13T23:58:47.656Z]  ===================================================================================================================
00:25:16.924  
[2024-12-13T23:58:47.656Z]  Total                       :             131654.97     128.57       0.00     0.00    7707.88    1392.64   24307.90'
00:25:16.924    23:58:47	-- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:25:16.924    23:58:47	-- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:25:16.924   23:58:47	-- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]]
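The two grep traces at common.sh@32 show how get_num_jobs digests the captured output before the `[[ 4 == \4 ]]` assertion fires; a plausible reconstruction (name and pipeline straight from the trace, the function wrapper is inferred):

	# get_num_jobs OUTPUT -- extract N from bdevperf's
	# "Using job config with N jobs" banner (common.sh@32).
	get_num_jobs() {
		echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
	}

	# Usage, as in test_config.sh@23:
	[[ $(get_num_jobs "$bdevperf_output") == "4" ]]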
00:25:16.924    23:58:47	-- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:25:16.924  [2024-12-13 23:58:47.511450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:16.924  [2024-12-13 23:58:47.511659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132295 ]
00:25:17.183  [2024-12-13 23:58:47.679113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:17.183  [2024-12-13 23:58:47.894863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:17.750  cpumask for 'job0' is too big
00:25:17.750  cpumask for 'job1' is too big
00:25:17.750  cpumask for 'job2' is too big
00:25:17.750  cpumask for 'job3' is too big
00:25:21.038   23:58:51	-- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs
00:25:21.038  Running I/O for 2 seconds...
00:25:21.038  
00:25:21.038                                                                                                  Latency(us)
00:25:21.038  
[2024-12-13T23:58:51.770Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:21.038  
[2024-12-13T23:58:51.770Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:25:21.038  	 Malloc0             :       2.01   33217.46      32.44       0.00     0.00    7704.15    1489.45   11975.21
00:25:21.038  
[2024-12-13T23:58:51.770Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:25:21.038  	 Malloc0             :       2.01   33195.22      32.42       0.00     0.00    7695.92    1385.19   10604.92
00:25:21.038  
[2024-12-13T23:58:51.770Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:25:21.038  	 Malloc0             :       2.01   33173.63      32.40       0.00     0.00    7688.11    1414.98    9175.04
00:25:21.038  
[2024-12-13T23:58:51.770Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:25:21.038  	 Malloc0             :       2.02   33245.72      32.47       0.00     0.00    7658.56     700.04    8043.05
00:25:21.038  
[2024-12-13T23:58:51.770Z]  ===================================================================================================================
00:25:21.038  
[2024-12-13T23:58:51.770Z]  Total                       :             132832.03     129.72       0.00     0.00    7686.66     700.04   11975.21'
00:25:21.038   23:58:51	-- bdevperf/test_config.sh@27 -- # cleanup
00:25:21.038   23:58:51	-- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:25:21.038   23:58:51	-- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0
00:25:21.038   23:58:51	-- bdevperf/common.sh@8 -- # local job_section=job0
00:25:21.038   23:58:51	-- bdevperf/common.sh@9 -- # local rw=write
00:25:21.038   23:58:51	-- bdevperf/common.sh@10 -- # local filename=Malloc0
00:25:21.038   23:58:51	-- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:25:21.038   23:58:51	-- bdevperf/common.sh@18 -- # job='[job0]'
00:25:21.038  
00:25:21.038   23:58:51	-- bdevperf/common.sh@19 -- # echo
00:25:21.038   23:58:51	-- bdevperf/common.sh@20 -- # cat
00:25:21.038   23:58:51	-- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0
00:25:21.038   23:58:51	-- bdevperf/common.sh@8 -- # local job_section=job1
00:25:21.038   23:58:51	-- bdevperf/common.sh@9 -- # local rw=write
00:25:21.038   23:58:51	-- bdevperf/common.sh@10 -- # local filename=Malloc0
00:25:21.038   23:58:51	-- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:25:21.038   23:58:51	-- bdevperf/common.sh@18 -- # job='[job1]'
00:25:21.038  
00:25:21.038   23:58:51	-- bdevperf/common.sh@19 -- # echo
00:25:21.038   23:58:51	-- bdevperf/common.sh@20 -- # cat
00:25:21.038   23:58:51	-- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0
00:25:21.038   23:58:51	-- bdevperf/common.sh@8 -- # local job_section=job2
00:25:21.038   23:58:51	-- bdevperf/common.sh@9 -- # local rw=write
00:25:21.038   23:58:51	-- bdevperf/common.sh@10 -- # local filename=Malloc0
00:25:21.038   23:58:51	-- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:25:21.038   23:58:51	-- bdevperf/common.sh@18 -- # job='[job2]'
00:25:21.038  
00:25:21.038   23:58:51	-- bdevperf/common.sh@19 -- # echo
00:25:21.038   23:58:51	-- bdevperf/common.sh@20 -- # cat
00:25:21.038    23:58:51	-- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:25:25.259   23:58:55	-- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-12-13 23:58:51.673668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:25.259  [2024-12-13 23:58:51.673851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132350 ]
00:25:25.259  Using job config with 3 jobs
00:25:25.259  [2024-12-13 23:58:51.842653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:25.259  [2024-12-13 23:58:52.047787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:25.259  cpumask for '\''job0'\'' is too big
00:25:25.259  cpumask for '\''job1'\'' is too big
00:25:25.259  cpumask for '\''job2'\'' is too big
00:25:25.259  Running I/O for 2 seconds...
00:25:25.259  
00:25:25.259                                                                                                  Latency(us)
00:25:25.259  
[2024-12-13T23:58:55.991Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:25.259  
[2024-12-13T23:58:55.991Z]  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:25:25.259  	 Malloc0             :       2.01   44351.54      43.31       0.00     0.00    5767.44    1459.67    8698.41
00:25:25.259  
[2024-12-13T23:58:55.991Z]  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:25:25.259  	 Malloc0             :       2.01   44322.15      43.28       0.00     0.00    5761.51    1407.53    7238.75
00:25:25.259  
[2024-12-13T23:58:55.991Z]  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:25:25.259  	 Malloc0             :       2.01   44376.47      43.34       0.00     0.00    5744.95     692.60    6851.49
00:25:25.259  
[2024-12-13T23:58:55.991Z]  ===================================================================================================================
00:25:25.259  
[2024-12-13T23:58:55.991Z]  Total                       :             133050.16     129.93       0.00     0.00    5757.95     692.60    8698.41'
00:25:25.259    23:58:55	-- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-12-13 23:58:51.673668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:25.259  [2024-12-13 23:58:51.673851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132350 ]
00:25:25.259  Using job config with 3 jobs
00:25:25.259  [2024-12-13 23:58:51.842653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:25.259  [2024-12-13 23:58:52.047787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:25.259  cpumask for '\''job0'\'' is too big
00:25:25.259  cpumask for '\''job1'\'' is too big
00:25:25.259  cpumask for '\''job2'\'' is too big
00:25:25.259  Running I/O for 2 seconds...
00:25:25.259  
00:25:25.259                                                                                                  Latency(us)
00:25:25.259  
[2024-12-13T23:58:55.991Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:25.259  
[2024-12-13T23:58:55.991Z]  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:25:25.259  	 Malloc0             :       2.01   44351.54      43.31       0.00     0.00    5767.44    1459.67    8698.41
00:25:25.259  
[2024-12-13T23:58:55.991Z]  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:25:25.259  	 Malloc0             :       2.01   44322.15      43.28       0.00     0.00    5761.51    1407.53    7238.75
00:25:25.259  
[2024-12-13T23:58:55.991Z]  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:25:25.259  	 Malloc0             :       2.01   44376.47      43.34       0.00     0.00    5744.95     692.60    6851.49
00:25:25.259  
[2024-12-13T23:58:55.991Z]  ===================================================================================================================
00:25:25.259  
[2024-12-13T23:58:55.991Z]  Total                       :             133050.16     129.93       0.00     0.00    5757.95     692.60    8698.41'
00:25:25.259    23:58:55	-- bdevperf/common.sh@32 -- # echo '[2024-12-13 23:58:51.673668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:25.259  [2024-12-13 23:58:51.673851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132350 ]
00:25:25.259  Using job config with 3 jobs
00:25:25.259  [2024-12-13 23:58:51.842653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:25.259  [2024-12-13 23:58:52.047787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:25.259  cpumask for '\''job0'\'' is too big
00:25:25.259  cpumask for '\''job1'\'' is too big
00:25:25.259  cpumask for '\''job2'\'' is too big
00:25:25.259  Running I/O for 2 seconds...
00:25:25.259  
00:25:25.259                                                                                                  Latency(us)
00:25:25.259  
[2024-12-13T23:58:55.991Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:25.259  
[2024-12-13T23:58:55.991Z]  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:25:25.259  	 Malloc0             :       2.01   44351.54      43.31       0.00     0.00    5767.44    1459.67    8698.41
00:25:25.259  
[2024-12-13T23:58:55.991Z]  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:25:25.259  	 Malloc0             :       2.01   44322.15      43.28       0.00     0.00    5761.51    1407.53    7238.75
00:25:25.259  
[2024-12-13T23:58:55.991Z]  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:25:25.259  	 Malloc0             :       2.01   44376.47      43.34       0.00     0.00    5744.95     692.60    6851.49
00:25:25.259  
[2024-12-13T23:58:55.991Z]  ===================================================================================================================
00:25:25.259  
[2024-12-13T23:58:55.991Z]  Total                       :             133050.16     129.93       0.00     0.00    5757.95     692.60    8698.41'
00:25:25.259    23:58:55	-- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:25:25.259    23:58:55	-- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:25:25.259   23:58:55	-- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]]
00:25:25.259   23:58:55	-- bdevperf/test_config.sh@35 -- # cleanup
00:25:25.259   23:58:55	-- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:25:25.259   23:58:55	-- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1
00:25:25.259   23:58:55	-- bdevperf/common.sh@8 -- # local job_section=global
00:25:25.259   23:58:55	-- bdevperf/common.sh@9 -- # local rw=rw
00:25:25.259   23:58:55	-- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1
00:25:25.259   23:58:55	-- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]]
00:25:25.259   23:58:55	-- bdevperf/common.sh@13 -- # cat
00:25:25.259   23:58:55	-- bdevperf/common.sh@18 -- # job='[global]'
00:25:25.259  
00:25:25.259   23:58:55	-- bdevperf/common.sh@19 -- # echo
00:25:25.260   23:58:55	-- bdevperf/common.sh@20 -- # cat
00:25:25.260   23:58:55	-- bdevperf/test_config.sh@38 -- # create_job job0
00:25:25.260   23:58:55	-- bdevperf/common.sh@8 -- # local job_section=job0
00:25:25.260   23:58:55	-- bdevperf/common.sh@9 -- # local rw=
00:25:25.260   23:58:55	-- bdevperf/common.sh@10 -- # local filename=
00:25:25.260   23:58:55	-- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:25:25.260   23:58:55	-- bdevperf/common.sh@18 -- # job='[job0]'
00:25:25.260  
00:25:25.260   23:58:55	-- bdevperf/common.sh@19 -- # echo
00:25:25.260   23:58:55	-- bdevperf/common.sh@20 -- # cat
00:25:25.260   23:58:55	-- bdevperf/test_config.sh@39 -- # create_job job1
00:25:25.260   23:58:55	-- bdevperf/common.sh@8 -- # local job_section=job1
00:25:25.260   23:58:55	-- bdevperf/common.sh@9 -- # local rw=
00:25:25.260   23:58:55	-- bdevperf/common.sh@10 -- # local filename=
00:25:25.260   23:58:55	-- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:25:25.260   23:58:55	-- bdevperf/common.sh@18 -- # job='[job1]'
00:25:25.260  
00:25:25.260   23:58:55	-- bdevperf/common.sh@19 -- # echo
00:25:25.260   23:58:55	-- bdevperf/common.sh@20 -- # cat
00:25:25.260   23:58:55	-- bdevperf/test_config.sh@40 -- # create_job job2
00:25:25.260   23:58:55	-- bdevperf/common.sh@8 -- # local job_section=job2
00:25:25.260   23:58:55	-- bdevperf/common.sh@9 -- # local rw=
00:25:25.260   23:58:55	-- bdevperf/common.sh@10 -- # local filename=
00:25:25.260   23:58:55	-- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:25:25.260   23:58:55	-- bdevperf/common.sh@18 -- # job='[job2]'
00:25:25.260  
00:25:25.260   23:58:55	-- bdevperf/common.sh@19 -- # echo
00:25:25.260   23:58:55	-- bdevperf/common.sh@20 -- # cat
00:25:25.260   23:58:55	-- bdevperf/test_config.sh@41 -- # create_job job3
00:25:25.260   23:58:55	-- bdevperf/common.sh@8 -- # local job_section=job3
00:25:25.260   23:58:55	-- bdevperf/common.sh@9 -- # local rw=
00:25:25.260   23:58:55	-- bdevperf/common.sh@10 -- # local filename=
00:25:25.260   23:58:55	-- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]]
00:25:25.260   23:58:55	-- bdevperf/common.sh@18 -- # job='[job3]'
00:25:25.260  
00:25:25.260   23:58:55	-- bdevperf/common.sh@19 -- # echo
00:25:25.260   23:58:55	-- bdevperf/common.sh@20 -- # cat
00:25:25.260    23:58:55	-- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:25:29.449   23:58:59	-- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-12-13 23:58:55.833340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:29.449  [2024-12-13 23:58:55.833525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132411 ]
00:25:29.449  Using job config with 4 jobs
00:25:29.449  [2024-12-13 23:58:55.999732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:29.449  [2024-12-13 23:58:56.203930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:29.449  cpumask for '\''job0'\'' is too big
00:25:29.449  cpumask for '\''job1'\'' is too big
00:25:29.449  cpumask for '\''job2'\'' is too big
00:25:29.449  cpumask for '\''job3'\'' is too big
00:25:29.449  Running I/O for 2 seconds...
00:25:29.449  
00:25:29.449                                                                                                  Latency(us)
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc0             :       2.02   16334.88      15.95       0.00     0.00   15657.57    3038.49   24665.37
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc1             :       2.02   16323.60      15.94       0.00     0.00   15655.16    3544.90   24665.37
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc0             :       2.02   16312.91      15.93       0.00     0.00   15624.73    2874.65   21686.46
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc1             :       2.03   16302.01      15.92       0.00     0.00   15623.10    3425.75   21686.46
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc0             :       2.04   16353.52      15.97       0.00     0.00   15532.04    2978.91   18707.55
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc1             :       2.04   16342.59      15.96       0.00     0.00   15530.17    3485.32   18707.55
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc0             :       2.04   16332.09      15.95       0.00     0.00   15499.09    2964.01   16324.42
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc1             :       2.04   16321.15      15.94       0.00     0.00   15498.60    3440.64   16324.42
00:25:29.449  
[2024-12-13T23:59:00.181Z]  ===================================================================================================================
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Total                       :             130622.75     127.56       0.00     0.00   15577.32    2874.65   24665.37'
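This pass built a [global] section with rw=rw and filename=Malloc0:Malloc1 plus four bare job sections, and the table duly shows paired Malloc0/Malloc1 rows for each of the four jobs. The generated test.conf should therefore look roughly like this (assembled from the create_job arguments above; the log never dumps the file, and the 70% read mix in the table presumably comes from the defaults cat'ed by the global branch):

	[global]
	rw=rw
	filename=Malloc0:Malloc1

	[job0]
	[job1]
	[job2]
	[job3]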
00:25:29.449    23:58:59	-- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-12-13 23:58:55.833340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:29.449  [2024-12-13 23:58:55.833525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132411 ]
00:25:29.449  Using job config with 4 jobs
00:25:29.449  [2024-12-13 23:58:55.999732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:29.449  [2024-12-13 23:58:56.203930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:29.449  cpumask for '\''job0'\'' is too big
00:25:29.449  cpumask for '\''job1'\'' is too big
00:25:29.449  cpumask for '\''job2'\'' is too big
00:25:29.449  cpumask for '\''job3'\'' is too big
00:25:29.449  Running I/O for 2 seconds...
00:25:29.449  
00:25:29.449                                                                                                  Latency(us)
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc0             :       2.02   16334.88      15.95       0.00     0.00   15657.57    3038.49   24665.37
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc1             :       2.02   16323.60      15.94       0.00     0.00   15655.16    3544.90   24665.37
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc0             :       2.02   16312.91      15.93       0.00     0.00   15624.73    2874.65   21686.46
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc1             :       2.03   16302.01      15.92       0.00     0.00   15623.10    3425.75   21686.46
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc0             :       2.04   16353.52      15.97       0.00     0.00   15532.04    2978.91   18707.55
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc1             :       2.04   16342.59      15.96       0.00     0.00   15530.17    3485.32   18707.55
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc0             :       2.04   16332.09      15.95       0.00     0.00   15499.09    2964.01   16324.42
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc1             :       2.04   16321.15      15.94       0.00     0.00   15498.60    3440.64   16324.42
00:25:29.449  
[2024-12-13T23:59:00.181Z]  ===================================================================================================================
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Total                       :             130622.75     127.56       0.00     0.00   15577.32    2874.65   24665.37'
00:25:29.449    23:58:59	-- bdevperf/common.sh@32 -- # echo '[2024-12-13 23:58:55.833340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:29.449  [2024-12-13 23:58:55.833525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132411 ]
00:25:29.449  Using job config with 4 jobs
00:25:29.449  [2024-12-13 23:58:55.999732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:29.449  [2024-12-13 23:58:56.203930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:29.449  cpumask for '\''job0'\'' is too big
00:25:29.449  cpumask for '\''job1'\'' is too big
00:25:29.449  cpumask for '\''job2'\'' is too big
00:25:29.449  cpumask for '\''job3'\'' is too big
00:25:29.449  Running I/O for 2 seconds...
00:25:29.449  
00:25:29.449                                                                                                  Latency(us)
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc0             :       2.02   16334.88      15.95       0.00     0.00   15657.57    3038.49   24665.37
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc1             :       2.02   16323.60      15.94       0.00     0.00   15655.16    3544.90   24665.37
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc0             :       2.02   16312.91      15.93       0.00     0.00   15624.73    2874.65   21686.46
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc1             :       2.03   16302.01      15.92       0.00     0.00   15623.10    3425.75   21686.46
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc0             :       2.04   16353.52      15.97       0.00     0.00   15532.04    2978.91   18707.55
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc1             :       2.04   16342.59      15.96       0.00     0.00   15530.17    3485.32   18707.55
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc0             :       2.04   16332.09      15.95       0.00     0.00   15499.09    2964.01   16324.42
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:25:29.449  	 Malloc1             :       2.04   16321.15      15.94       0.00     0.00   15498.60    3440.64   16324.42
00:25:29.449  
[2024-12-13T23:59:00.181Z]  ===================================================================================================================
00:25:29.449  
[2024-12-13T23:59:00.181Z]  Total                       :             130622.75     127.56       0.00     0.00   15577.32    2874.65   24665.37'
00:25:29.449    23:58:59	-- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:25:29.449    23:58:59	-- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:25:29.449   23:58:59	-- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]]
00:25:29.449   23:58:59	-- bdevperf/test_config.sh@44 -- # cleanup
00:25:29.449   23:58:59	-- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:25:29.449   23:58:59	-- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:25:29.449  ************************************
00:25:29.449  END TEST bdevperf_config
00:25:29.449  ************************************
00:25:29.449  
00:25:29.449  real	0m16.859s
00:25:29.449  user	0m14.889s
00:25:29.449  sys	0m1.419s
00:25:29.449   23:58:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:29.449   23:58:59	-- common/autotest_common.sh@10 -- # set +x
00:25:29.449    23:58:59	-- spdk/autotest.sh@185 -- # uname -s
00:25:29.450   23:58:59	-- spdk/autotest.sh@185 -- # [[ Linux == Linux ]]
00:25:29.450   23:58:59	-- spdk/autotest.sh@186 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
00:25:29.450   23:58:59	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:29.450   23:58:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:29.450   23:58:59	-- common/autotest_common.sh@10 -- # set +x
00:25:29.450  ************************************
00:25:29.450  START TEST reactor_set_interrupt
00:25:29.450  ************************************
00:25:29.450   23:58:59	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
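The `'[' 2 -le 1 ']'` guard, the xtrace_disable calls and the START/END banners framing each suite all come from run_test in autotest_common.sh; sketched from the traced line numbers (banner text is verbatim from the log, the body is inferred):

	run_test() {
		# autotest_common.sh@1087: with a name plus one script $# is 2,
		# so this branch (assumed argument-count shorthand) is skipped.
		if [ "$#" -le 1 ]; then
			:
		fi
		local test_name=$1
		shift
		xtrace_disable   # @1093: silence tracing around the banner
		echo "************************************"
		echo "START TEST $test_name"
		echo "************************************"
		time "$@"        # @1114: the suite itself, timed -- hence the
		                 # real/user/sys lines after each END banner
		echo "************************************"
		echo "END TEST $test_name"
		echo "************************************"
		xtrace_disable   # @1115
	}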
00:25:29.450  * Looking for test storage...
00:25:29.450  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:25:29.450    23:59:00	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:25:29.450     23:59:00	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:25:29.450     23:59:00	-- common/autotest_common.sh@1690 -- # lcov --version
00:25:29.450    23:59:00	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:25:29.450    23:59:00	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:25:29.450    23:59:00	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:25:29.450    23:59:00	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:25:29.450    23:59:00	-- scripts/common.sh@335 -- # IFS=.-:
00:25:29.450    23:59:00	-- scripts/common.sh@335 -- # read -ra ver1
00:25:29.450    23:59:00	-- scripts/common.sh@336 -- # IFS=.-:
00:25:29.450    23:59:00	-- scripts/common.sh@336 -- # read -ra ver2
00:25:29.450    23:59:00	-- scripts/common.sh@337 -- # local 'op=<'
00:25:29.450    23:59:00	-- scripts/common.sh@339 -- # ver1_l=2
00:25:29.450    23:59:00	-- scripts/common.sh@340 -- # ver2_l=1
00:25:29.450    23:59:00	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:25:29.450    23:59:00	-- scripts/common.sh@343 -- # case "$op" in
00:25:29.450    23:59:00	-- scripts/common.sh@344 -- # : 1
00:25:29.450    23:59:00	-- scripts/common.sh@363 -- # (( v = 0 ))
00:25:29.450    23:59:00	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:29.450     23:59:00	-- scripts/common.sh@364 -- # decimal 1
00:25:29.450     23:59:00	-- scripts/common.sh@352 -- # local d=1
00:25:29.450     23:59:00	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:29.450     23:59:00	-- scripts/common.sh@354 -- # echo 1
00:25:29.450    23:59:00	-- scripts/common.sh@364 -- # ver1[v]=1
00:25:29.450     23:59:00	-- scripts/common.sh@365 -- # decimal 2
00:25:29.450     23:59:00	-- scripts/common.sh@352 -- # local d=2
00:25:29.450     23:59:00	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:29.450     23:59:00	-- scripts/common.sh@354 -- # echo 2
00:25:29.450    23:59:00	-- scripts/common.sh@365 -- # ver2[v]=2
00:25:29.450    23:59:00	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:25:29.450    23:59:00	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:25:29.450    23:59:00	-- scripts/common.sh@367 -- # return 0
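The scripts/common.sh trace above is a pure-bash version comparison: `lt 1.15 2` delegates to cmp_versions, both versions are split on '.', '-' and ':' (the IFS=.-: reads at @335-336), and components are compared numerically left to right, with the decimal helper (@352-354) validating each piece. Condensed into one sketch (loop structure verbatim from the trace; the 10# base prefix and the :-0 default are our shorthand for the traced decimal calls):

	cmp_versions() {
		local ver1 ver1_l ver2 ver2_l
		IFS=.-: read -ra ver1 <<< "$1"   # scripts/common.sh@335
		IFS=.-: read -ra ver2 <<< "$3"   # @336
		local op=$2 lt=0 gt=0 eq=0 v
		ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
		for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do   # @363
			# Missing components compare as 0.
			if ((10#${ver1[v]:-0} > 10#${ver2[v]:-0})); then gt=1; break; fi   # @366
			if ((10#${ver1[v]:-0} < 10#${ver2[v]:-0})); then lt=1; break; fi   # @367
		done
		((lt == 0 && gt == 0)) && eq=1
		case "$op" in   # @343
			"<") ((lt == 1)) ;;
			">") ((gt == 1)) ;;
			"=") ((eq == 1)) ;;
		esac
	}

	lt() { cmp_versions "$1" "<" "$2"; }   # so `lt 1.15 2` returns true here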
00:25:29.450    23:59:00	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:29.450    23:59:00	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:25:29.450  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:29.450  		--rc genhtml_branch_coverage=1
00:25:29.450  		--rc genhtml_function_coverage=1
00:25:29.450  		--rc genhtml_legend=1
00:25:29.450  		--rc geninfo_all_blocks=1
00:25:29.450  		--rc geninfo_unexecuted_blocks=1
00:25:29.450  		
00:25:29.450  		'
00:25:29.450    23:59:00	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:25:29.450  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:29.450  		--rc genhtml_branch_coverage=1
00:25:29.450  		--rc genhtml_function_coverage=1
00:25:29.450  		--rc genhtml_legend=1
00:25:29.450  		--rc geninfo_all_blocks=1
00:25:29.450  		--rc geninfo_unexecuted_blocks=1
00:25:29.450  		
00:25:29.450  		'
00:25:29.450    23:59:00	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:25:29.450  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:29.450  		--rc genhtml_branch_coverage=1
00:25:29.450  		--rc genhtml_function_coverage=1
00:25:29.450  		--rc genhtml_legend=1
00:25:29.450  		--rc geninfo_all_blocks=1
00:25:29.450  		--rc geninfo_unexecuted_blocks=1
00:25:29.450  		
00:25:29.450  		'
00:25:29.450    23:59:00	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:25:29.450  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:29.450  		--rc genhtml_branch_coverage=1
00:25:29.450  		--rc genhtml_function_coverage=1
00:25:29.450  		--rc genhtml_legend=1
00:25:29.450  		--rc geninfo_all_blocks=1
00:25:29.450  		--rc geninfo_unexecuted_blocks=1
00:25:29.450  		
00:25:29.450  		'
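These exported strings get splatted onto later lcov invocations by the coverage stage; the --rc options embedded in $LCOV are exactly the ones listed above. An illustrative call (the capture command itself is our assumption, not traced here):

	# $LCOV expands to `lcov --rc ...`; word splitting is intentional.
	$LCOV --capture --directory "$rootdir" --output-file coverage.info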
00:25:29.450   23:59:00	-- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh
00:25:29.450      23:59:00	-- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
00:25:29.450     23:59:00	-- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt
00:25:29.711    23:59:00	-- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt
00:25:29.711     23:59:00	-- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../..
00:25:29.711    23:59:00	-- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:25:29.711    23:59:00	-- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:25:29.711     23:59:00	-- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:25:29.711     23:59:00	-- common/autotest_common.sh@34 -- # set -e
00:25:29.711     23:59:00	-- common/autotest_common.sh@35 -- # shopt -s nullglob
00:25:29.711     23:59:00	-- common/autotest_common.sh@36 -- # shopt -s extglob
00:25:29.711     23:59:00	-- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:25:29.711     23:59:00	-- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:25:29.711      23:59:00	-- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:25:29.711      23:59:00	-- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:25:29.711      23:59:00	-- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:25:29.711      23:59:00	-- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:25:29.711      23:59:00	-- common/build_config.sh@5 -- # CONFIG_USDT=n
00:25:29.711      23:59:00	-- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:25:29.711      23:59:00	-- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:25:29.711      23:59:00	-- common/build_config.sh@8 -- # CONFIG_RBD=n
00:25:29.711      23:59:00	-- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:25:29.711      23:59:00	-- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:25:29.711      23:59:00	-- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:25:29.711      23:59:00	-- common/build_config.sh@12 -- # CONFIG_SMA=n
00:25:29.711      23:59:00	-- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:25:29.711      23:59:00	-- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:25:29.711      23:59:00	-- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:25:29.711      23:59:00	-- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:25:29.711      23:59:00	-- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n
00:25:29.711      23:59:00	-- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:25:29.711      23:59:00	-- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:25:29.711      23:59:00	-- common/build_config.sh@20 -- # CONFIG_LTO=n
00:25:29.711      23:59:00	-- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y
00:25:29.711      23:59:00	-- common/build_config.sh@22 -- # CONFIG_CET=n
00:25:29.711      23:59:00	-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:25:29.711      23:59:00	-- common/build_config.sh@24 -- # CONFIG_OCF_PATH=
00:25:29.711      23:59:00	-- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y
00:25:29.711      23:59:00	-- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n
00:25:29.711      23:59:00	-- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n
00:25:29.711      23:59:00	-- common/build_config.sh@28 -- # CONFIG_UBLK=n
00:25:29.711      23:59:00	-- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y
00:25:29.711      23:59:00	-- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH=
00:25:29.711      23:59:00	-- common/build_config.sh@31 -- # CONFIG_OCF=n
00:25:29.711      23:59:00	-- common/build_config.sh@32 -- # CONFIG_FUSE=n
00:25:29.711      23:59:00	-- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR=
00:25:29.711      23:59:00	-- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=
00:25:29.711      23:59:00	-- common/build_config.sh@35 -- # CONFIG_FUZZER=n
00:25:29.711      23:59:00	-- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build
00:25:29.711      23:59:00	-- common/build_config.sh@37 -- # CONFIG_CRYPTO=n
00:25:29.711      23:59:00	-- common/build_config.sh@38 -- # CONFIG_PGO_USE=n
00:25:29.711      23:59:00	-- common/build_config.sh@39 -- # CONFIG_VHOST=y
00:25:29.711      23:59:00	-- common/build_config.sh@40 -- # CONFIG_DAOS=n
00:25:29.711      23:59:00	-- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=
00:25:29.711      23:59:00	-- common/build_config.sh@42 -- # CONFIG_DAOS_DIR=
00:25:29.711      23:59:00	-- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y
00:25:29.711      23:59:00	-- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:25:29.711      23:59:00	-- common/build_config.sh@45 -- # CONFIG_VIRTIO=y
00:25:29.711      23:59:00	-- common/build_config.sh@46 -- # CONFIG_COVERAGE=y
00:25:29.711      23:59:00	-- common/build_config.sh@47 -- # CONFIG_RDMA=y
00:25:29.711      23:59:00	-- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:25:29.711      23:59:00	-- common/build_config.sh@49 -- # CONFIG_URING_PATH=
00:25:29.711      23:59:00	-- common/build_config.sh@50 -- # CONFIG_XNVME=n
00:25:29.712      23:59:00	-- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n
00:25:29.712      23:59:00	-- common/build_config.sh@52 -- # CONFIG_ARCH=native
00:25:29.712      23:59:00	-- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n
00:25:29.712      23:59:00	-- common/build_config.sh@54 -- # CONFIG_WERROR=y
00:25:29.712      23:59:00	-- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n
00:25:29.712      23:59:00	-- common/build_config.sh@56 -- # CONFIG_UBSAN=y
00:25:29.712      23:59:00	-- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR=
00:25:29.712      23:59:00	-- common/build_config.sh@58 -- # CONFIG_GOLANG=n
00:25:29.712      23:59:00	-- common/build_config.sh@59 -- # CONFIG_ISAL=y
00:25:29.712      23:59:00	-- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n
00:25:29.712      23:59:00	-- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=
00:25:29.712      23:59:00	-- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs
00:25:29.712      23:59:00	-- common/build_config.sh@63 -- # CONFIG_APPS=y
00:25:29.712      23:59:00	-- common/build_config.sh@64 -- # CONFIG_SHARED=n
00:25:29.712      23:59:00	-- common/build_config.sh@65 -- # CONFIG_FC_PATH=
00:25:29.712      23:59:00	-- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n
00:25:29.712      23:59:00	-- common/build_config.sh@67 -- # CONFIG_FC=n
00:25:29.712      23:59:00	-- common/build_config.sh@68 -- # CONFIG_AVAHI=n
00:25:29.712      23:59:00	-- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y
00:25:29.712      23:59:00	-- common/build_config.sh@70 -- # CONFIG_RAID5F=y
00:25:29.712      23:59:00	-- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y
00:25:29.712      23:59:00	-- common/build_config.sh@72 -- # CONFIG_TESTS=y
00:25:29.712      23:59:00	-- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n
00:25:29.712      23:59:00	-- common/build_config.sh@74 -- # CONFIG_MAX_LCORES=
00:25:29.712      23:59:00	-- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n
00:25:29.712      23:59:00	-- common/build_config.sh@76 -- # CONFIG_DEBUG=y
00:25:29.712      23:59:00	-- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n
00:25:29.712      23:59:00	-- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX=
00:25:29.712      23:59:00	-- common/build_config.sh@79 -- # CONFIG_URING=n
00:25:29.712     23:59:00	-- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:25:29.712        23:59:00	-- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:25:29.712       23:59:00	-- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:25:29.712      23:59:00	-- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:25:29.712      23:59:00	-- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:25:29.712      23:59:00	-- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:25:29.712      23:59:00	-- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:25:29.712      23:59:00	-- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:25:29.712      23:59:00	-- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:25:29.712      23:59:00	-- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:25:29.712      23:59:00	-- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:25:29.712      23:59:00	-- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:25:29.712      23:59:00	-- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:25:29.712      23:59:00	-- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
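applications.sh defines every launcher as a one-element bash array rather than a flat string, so call sites can expand the binary and any per-test flags as separate, unmangled words. The pattern, with paths verbatim from the trace and an illustrative invocation (real spdk_tgt options, but nothing is launched here):

	_app_dir=/home/vagrant/spdk_repo/spdk/build/bin
	SPDK_APP=("$_app_dir/spdk_tgt")   # applications.sh@19

	# Array expansion keeps the binary path and each flag as separate words:
	"${SPDK_APP[@]}" --wait-for-rpc -m 0x1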
00:25:29.712      23:59:00	-- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:25:29.712      23:59:00	-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:25:29.712  #define SPDK_CONFIG_H
00:25:29.712  #define SPDK_CONFIG_APPS 1
00:25:29.712  #define SPDK_CONFIG_ARCH native
00:25:29.712  #define SPDK_CONFIG_ASAN 1
00:25:29.712  #undef SPDK_CONFIG_AVAHI
00:25:29.712  #undef SPDK_CONFIG_CET
00:25:29.712  #define SPDK_CONFIG_COVERAGE 1
00:25:29.712  #define SPDK_CONFIG_CROSS_PREFIX 
00:25:29.712  #undef SPDK_CONFIG_CRYPTO
00:25:29.712  #undef SPDK_CONFIG_CRYPTO_MLX5
00:25:29.712  #undef SPDK_CONFIG_CUSTOMOCF
00:25:29.712  #undef SPDK_CONFIG_DAOS
00:25:29.712  #define SPDK_CONFIG_DAOS_DIR 
00:25:29.712  #define SPDK_CONFIG_DEBUG 1
00:25:29.712  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:25:29.712  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build
00:25:29.712  #define SPDK_CONFIG_DPDK_INC_DIR 
00:25:29.712  #define SPDK_CONFIG_DPDK_LIB_DIR 
00:25:29.712  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:25:29.712  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:25:29.712  #define SPDK_CONFIG_EXAMPLES 1
00:25:29.712  #undef SPDK_CONFIG_FC
00:25:29.712  #define SPDK_CONFIG_FC_PATH 
00:25:29.712  #define SPDK_CONFIG_FIO_PLUGIN 1
00:25:29.712  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:25:29.712  #undef SPDK_CONFIG_FUSE
00:25:29.712  #undef SPDK_CONFIG_FUZZER
00:25:29.712  #define SPDK_CONFIG_FUZZER_LIB 
00:25:29.712  #undef SPDK_CONFIG_GOLANG
00:25:29.712  #undef SPDK_CONFIG_HAVE_ARC4RANDOM
00:25:29.712  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:25:29.712  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:25:29.712  #undef SPDK_CONFIG_HAVE_LIBBSD
00:25:29.712  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:25:29.712  #define SPDK_CONFIG_IDXD 1
00:25:29.712  #undef SPDK_CONFIG_IDXD_KERNEL
00:25:29.712  #undef SPDK_CONFIG_IPSEC_MB
00:25:29.712  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:25:29.712  #define SPDK_CONFIG_ISAL 1
00:25:29.712  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:25:29.712  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:25:29.712  #define SPDK_CONFIG_LIBDIR 
00:25:29.712  #undef SPDK_CONFIG_LTO
00:25:29.712  #define SPDK_CONFIG_MAX_LCORES 
00:25:29.712  #define SPDK_CONFIG_NVME_CUSE 1
00:25:29.712  #undef SPDK_CONFIG_OCF
00:25:29.712  #define SPDK_CONFIG_OCF_PATH 
00:25:29.712  #define SPDK_CONFIG_OPENSSL_PATH 
00:25:29.712  #undef SPDK_CONFIG_PGO_CAPTURE
00:25:29.712  #undef SPDK_CONFIG_PGO_USE
00:25:29.712  #define SPDK_CONFIG_PREFIX /usr/local
00:25:29.712  #define SPDK_CONFIG_RAID5F 1
00:25:29.712  #undef SPDK_CONFIG_RBD
00:25:29.712  #define SPDK_CONFIG_RDMA 1
00:25:29.712  #define SPDK_CONFIG_RDMA_PROV verbs
00:25:29.712  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:25:29.712  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:25:29.712  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:25:29.712  #undef SPDK_CONFIG_SHARED
00:25:29.712  #undef SPDK_CONFIG_SMA
00:25:29.712  #define SPDK_CONFIG_TESTS 1
00:25:29.712  #undef SPDK_CONFIG_TSAN
00:25:29.712  #undef SPDK_CONFIG_UBLK
00:25:29.712  #define SPDK_CONFIG_UBSAN 1
00:25:29.712  #define SPDK_CONFIG_UNIT_TESTS 1
00:25:29.712  #undef SPDK_CONFIG_URING
00:25:29.712  #define SPDK_CONFIG_URING_PATH 
00:25:29.712  #undef SPDK_CONFIG_URING_ZNS
00:25:29.712  #undef SPDK_CONFIG_USDT
00:25:29.712  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:25:29.712  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:25:29.712  #undef SPDK_CONFIG_VFIO_USER
00:25:29.712  #define SPDK_CONFIG_VFIO_USER_DIR 
00:25:29.712  #define SPDK_CONFIG_VHOST 1
00:25:29.712  #define SPDK_CONFIG_VIRTIO 1
00:25:29.712  #undef SPDK_CONFIG_VTUNE
00:25:29.712  #define SPDK_CONFIG_VTUNE_DIR 
00:25:29.712  #define SPDK_CONFIG_WERROR 1
00:25:29.712  #define SPDK_CONFIG_WPDK_DIR 
00:25:29.712  #undef SPDK_CONFIG_XNVME
00:25:29.712  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:25:29.712      23:59:00	-- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
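applications.sh@23 reads the whole of include/spdk/config.h into a `[[ ... == *pattern* ]]` test; the xtrace prints the glob with every character escaped, but the idiom reduces to a substring check (file path verbatim, variable name assumed):

	# Does the built config.h define SPDK_CONFIG_DEBUG?
	config=$(< /home/vagrant/spdk_repo/spdk/include/spdk/config.h)
	if [[ $config == *"#define SPDK_CONFIG_DEBUG"* ]]; then
		: # debug build -- only then does @24 consult SPDK_AUTOTEST_DEBUG_APPS
	fi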
00:25:29.712     23:59:00	-- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:29.712      23:59:00	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:29.712      23:59:00	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:29.712      23:59:00	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:29.712       23:59:00	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:29.712       23:59:00	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:29.712       23:59:00	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:29.712       23:59:00	-- paths/export.sh@5 -- # export PATH
00:25:29.712       23:59:00	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
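paths/export.sh, as traced, just prepends each toolchain directory and re-exports; the duplicate go/protoc/golangci entries visible in the long PATH above are what you get when the file is sourced more than once in a job:

	PATH=/opt/go/1.21.1/bin:$PATH         # paths/export.sh@2
	PATH=/opt/golangci/1.54.2/bin:$PATH   # @3
	PATH=/opt/protoc/21.7/bin:$PATH       # @4
	export PATH                           # @5
	echo "$PATH"                          # @6 -- hence the echoed line above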
00:25:29.712     23:59:00	-- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:25:29.712        23:59:00	-- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:25:29.712       23:59:00	-- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:25:29.712      23:59:00	-- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:25:29.712       23:59:00	-- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:25:29.712      23:59:00	-- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:25:29.712      23:59:00	-- pm/common@16 -- # TEST_TAG=N/A
00:25:29.712      23:59:00	-- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
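Everything from autotest_common.sh@52 onward is one long run of a two-line idiom: set a default with the null command, then export the flag. xtrace shows only the post-expansion `: 1` / `: 0`, so the `:=` spelling is an inference, while the variable names and defaults are verbatim from the trace that follows:

	: "${RUN_NIGHTLY:=1}"                # autotest_common.sh@52 traces as `: 1`
	export RUN_NIGHTLY                   # @53
	: "${SPDK_AUTOTEST_DEBUG_APPS:=0}"   # @56
	export SPDK_AUTOTEST_DEBUG_APPS      # @57
	# ...and so on for every SPDK_RUN_* / SPDK_TEST_* flag below.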
00:25:29.712     23:59:00	-- common/autotest_common.sh@52 -- # : 1
00:25:29.712     23:59:00	-- common/autotest_common.sh@53 -- # export RUN_NIGHTLY
00:25:29.712     23:59:00	-- common/autotest_common.sh@56 -- # : 0
00:25:29.712     23:59:00	-- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:25:29.712     23:59:00	-- common/autotest_common.sh@58 -- # : 0
00:25:29.712     23:59:00	-- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND
00:25:29.712     23:59:00	-- common/autotest_common.sh@60 -- # : 1
00:25:29.712     23:59:00	-- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:25:29.712     23:59:00	-- common/autotest_common.sh@62 -- # : 1
00:25:29.712     23:59:00	-- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST
00:25:29.712     23:59:00	-- common/autotest_common.sh@64 -- # :
00:25:29.712     23:59:00	-- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD
00:25:29.712     23:59:00	-- common/autotest_common.sh@66 -- # : 0
00:25:29.712     23:59:00	-- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD
00:25:29.712     23:59:00	-- common/autotest_common.sh@68 -- # : 0
00:25:29.712     23:59:00	-- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL
00:25:29.712     23:59:00	-- common/autotest_common.sh@70 -- # : 0
00:25:29.712     23:59:00	-- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI
00:25:29.712     23:59:00	-- common/autotest_common.sh@72 -- # : 0
00:25:29.712     23:59:00	-- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR
00:25:29.712     23:59:00	-- common/autotest_common.sh@74 -- # : 1
00:25:29.712     23:59:00	-- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME
00:25:29.713     23:59:00	-- common/autotest_common.sh@76 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR
00:25:29.713     23:59:00	-- common/autotest_common.sh@78 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP
00:25:29.713     23:59:00	-- common/autotest_common.sh@80 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI
00:25:29.713     23:59:00	-- common/autotest_common.sh@82 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE
00:25:29.713     23:59:00	-- common/autotest_common.sh@84 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP
00:25:29.713     23:59:00	-- common/autotest_common.sh@86 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF
00:25:29.713     23:59:00	-- common/autotest_common.sh@88 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER
00:25:29.713     23:59:00	-- common/autotest_common.sh@90 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU
00:25:29.713     23:59:00	-- common/autotest_common.sh@92 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER
00:25:29.713     23:59:00	-- common/autotest_common.sh@94 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT
00:25:29.713     23:59:00	-- common/autotest_common.sh@96 -- # : rdma
00:25:29.713     23:59:00	-- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT
00:25:29.713     23:59:00	-- common/autotest_common.sh@98 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD
00:25:29.713     23:59:00	-- common/autotest_common.sh@100 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST
00:25:29.713     23:59:00	-- common/autotest_common.sh@102 -- # : 1
00:25:29.713     23:59:00	-- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV
00:25:29.713     23:59:00	-- common/autotest_common.sh@104 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT
00:25:29.713     23:59:00	-- common/autotest_common.sh@106 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS
00:25:29.713     23:59:00	-- common/autotest_common.sh@108 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT
00:25:29.713     23:59:00	-- common/autotest_common.sh@110 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL
00:25:29.713     23:59:00	-- common/autotest_common.sh@112 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS
00:25:29.713     23:59:00	-- common/autotest_common.sh@114 -- # : 1
00:25:29.713     23:59:00	-- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN
00:25:29.713     23:59:00	-- common/autotest_common.sh@116 -- # : 1
00:25:29.713     23:59:00	-- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN
00:25:29.713     23:59:00	-- common/autotest_common.sh@118 -- # :
00:25:29.713     23:59:00	-- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK
00:25:29.713     23:59:00	-- common/autotest_common.sh@120 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT
00:25:29.713     23:59:00	-- common/autotest_common.sh@122 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO
00:25:29.713     23:59:00	-- common/autotest_common.sh@124 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL
00:25:29.713     23:59:00	-- common/autotest_common.sh@126 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF
00:25:29.713     23:59:00	-- common/autotest_common.sh@128 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD
00:25:29.713     23:59:00	-- common/autotest_common.sh@130 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL
00:25:29.713     23:59:00	-- common/autotest_common.sh@132 -- # :
00:25:29.713     23:59:00	-- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK
00:25:29.713     23:59:00	-- common/autotest_common.sh@134 -- # : true
00:25:29.713     23:59:00	-- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X
00:25:29.713     23:59:00	-- common/autotest_common.sh@136 -- # : 1
00:25:29.713     23:59:00	-- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5
00:25:29.713     23:59:00	-- common/autotest_common.sh@138 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@139 -- # export SPDK_TEST_URING
00:25:29.713     23:59:00	-- common/autotest_common.sh@140 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT
00:25:29.713     23:59:00	-- common/autotest_common.sh@142 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO
00:25:29.713     23:59:00	-- common/autotest_common.sh@144 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER
00:25:29.713     23:59:00	-- common/autotest_common.sh@146 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD
00:25:29.713     23:59:00	-- common/autotest_common.sh@148 -- # :
00:25:29.713     23:59:00	-- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS
00:25:29.713     23:59:00	-- common/autotest_common.sh@150 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA
00:25:29.713     23:59:00	-- common/autotest_common.sh@152 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS
00:25:29.713     23:59:00	-- common/autotest_common.sh@154 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME
00:25:29.713     23:59:00	-- common/autotest_common.sh@156 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA
00:25:29.713     23:59:00	-- common/autotest_common.sh@158 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA
00:25:29.713     23:59:00	-- common/autotest_common.sh@160 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT
00:25:29.713     23:59:00	-- common/autotest_common.sh@163 -- # :
00:25:29.713     23:59:00	-- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET
00:25:29.713     23:59:00	-- common/autotest_common.sh@165 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS
00:25:29.713     23:59:00	-- common/autotest_common.sh@167 -- # : 0
00:25:29.713     23:59:00	-- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT
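[Editor's note] The `-- # : <value>` / `-- # export <VAR>` pairs above are the xtrace of bash's set-default idiom: `: "${VAR:=default}"` is a no-op whose argument expansion assigns the default when the variable is unset or empty, and the export on the next traced line publishes it to child processes. A minimal sketch of the pattern, with a hypothetical flag name:

#!/usr/bin/env bash
# ':' does nothing, but ${VAR:=0} inside its argument assigns 0 when the
# variable is unset or empty (SPDK_TEST_EXAMPLE is illustrative, not real).
: "${SPDK_TEST_EXAMPLE:=0}"
export SPDK_TEST_EXAMPLE
echo "SPDK_TEST_EXAMPLE=$SPDK_TEST_EXAMPLE"   # -> 0 unless preset by the CI job

Flags traced with a bare `:` (e.g. SPDK_TEST_AUTOBUILD, SPDK_RUN_EXTERNAL_DPDK) simply defaulted to the empty string.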
00:25:29.713     23:59:00	-- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:25:29.713     23:59:00	-- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:25:29.713     23:59:00	-- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:25:29.713     23:59:00	-- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:25:29.713     23:59:00	-- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:25:29.713     23:59:00	-- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:25:29.713     23:59:00	-- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:25:29.713     23:59:00	-- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:25:29.713     23:59:00	-- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:25:29.713     23:59:00	-- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:25:29.713     23:59:00	-- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:25:29.713     23:59:00	-- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:25:29.713     23:59:00	-- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1
00:25:29.713     23:59:00	-- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1
00:25:29.713     23:59:00	-- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:25:29.713     23:59:00	-- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:25:29.713     23:59:00	-- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:25:29.713     23:59:00	-- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:25:29.713     23:59:00	-- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:25:29.713     23:59:00	-- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file
00:25:29.713     23:59:00	-- common/autotest_common.sh@196 -- # cat
00:25:29.713     23:59:00	-- common/autotest_common.sh@222 -- # echo leak:libfuse3.so
00:25:29.713     23:59:00	-- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:25:29.713     23:59:00	-- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
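[Editor's note] Between the `rm -rf`, `cat`, `echo leak:libfuse3.so` and the LSAN_OPTIONS export, the trace is rebuilding a LeakSanitizer suppression file; xtrace does not echo redirections, so the targets in this sketch are assumed:

#!/usr/bin/env bash
# Recreate the suppression file and point LeakSanitizer at it. One rule per
# line; 'leak:libfuse3.so' silences a known leak inside libfuse3.
supp=/var/tmp/asan_suppression_file
rm -rf "$supp"
echo "leak:libfuse3.so" >> "$supp"
export LSAN_OPTIONS="suppressions=$supp"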
00:25:29.713     23:59:00	-- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:25:29.713     23:59:00	-- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:25:29.713     23:59:00	-- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']'
00:25:29.713     23:59:00	-- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR
00:25:29.713     23:59:00	-- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:25:29.713     23:59:00	-- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:25:29.713     23:59:00	-- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:25:29.713     23:59:00	-- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:25:29.713     23:59:00	-- common/autotest_common.sh@239 -- # export QEMU_BIN=
00:25:29.713     23:59:00	-- common/autotest_common.sh@239 -- # QEMU_BIN=
00:25:29.713     23:59:00	-- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:25:29.713     23:59:00	-- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:25:29.713     23:59:00	-- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:25:29.713     23:59:00	-- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:25:29.713     23:59:00	-- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:25:29.713     23:59:00	-- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:25:29.714     23:59:00	-- common/autotest_common.sh@247 -- # _LCOV_MAIN=0
00:25:29.714     23:59:00	-- common/autotest_common.sh@248 -- # _LCOV_LLVM=1
00:25:29.714     23:59:00	-- common/autotest_common.sh@249 -- # _LCOV=
00:25:29.714     23:59:00	-- common/autotest_common.sh@250 -- # [[ '' == *clang* ]]
00:25:29.714     23:59:00	-- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]]
00:25:29.714     23:59:00	-- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:25:29.714     23:59:00	-- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]=
00:25:29.714     23:59:00	-- common/autotest_common.sh@255 -- # lcov_opt=
00:25:29.714     23:59:00	-- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']'
00:25:29.714     23:59:00	-- common/autotest_common.sh@259 -- # export valgrind=
00:25:29.714     23:59:00	-- common/autotest_common.sh@259 -- # valgrind=
00:25:29.714      23:59:00	-- common/autotest_common.sh@265 -- # uname -s
00:25:29.714     23:59:00	-- common/autotest_common.sh@265 -- # '[' Linux = Linux ']'
00:25:29.714     23:59:00	-- common/autotest_common.sh@266 -- # HUGEMEM=4096
00:25:29.714     23:59:00	-- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes
00:25:29.714     23:59:00	-- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes
00:25:29.714     23:59:00	-- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]]
00:25:29.714     23:59:00	-- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]]
00:25:29.714     23:59:00	-- common/autotest_common.sh@275 -- # MAKE=make
00:25:29.714     23:59:00	-- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10
00:25:29.714     23:59:00	-- common/autotest_common.sh@292 -- # export HUGEMEM=4096
00:25:29.714     23:59:00	-- common/autotest_common.sh@292 -- # HUGEMEM=4096
00:25:29.714     23:59:00	-- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:25:29.714     23:59:00	-- common/autotest_common.sh@299 -- # NO_HUGE=()
00:25:29.714     23:59:00	-- common/autotest_common.sh@300 -- # TEST_MODE=
00:25:29.714     23:59:00	-- common/autotest_common.sh@319 -- # [[ -z 132503 ]]
00:25:29.714     23:59:00	-- common/autotest_common.sh@319 -- # kill -0 132503
00:25:29.714     23:59:00	-- common/autotest_common.sh@1675 -- # set_test_storage 2147483648
00:25:29.714     23:59:00	-- common/autotest_common.sh@329 -- # [[ -v testdir ]]
00:25:29.714     23:59:00	-- common/autotest_common.sh@331 -- # local requested_size=2147483648
00:25:29.714     23:59:00	-- common/autotest_common.sh@332 -- # local mount target_dir
00:25:29.714     23:59:00	-- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses
00:25:29.714     23:59:00	-- common/autotest_common.sh@335 -- # local source fs size avail mount use
00:25:29.714     23:59:00	-- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates
00:25:29.714      23:59:00	-- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX
00:25:29.714     23:59:00	-- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.wkgfH8
00:25:29.714     23:59:00	-- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:25:29.714     23:59:00	-- common/autotest_common.sh@346 -- # [[ -n '' ]]
00:25:29.714     23:59:00	-- common/autotest_common.sh@351 -- # [[ -n '' ]]
00:25:29.714     23:59:00	-- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.wkgfH8/tests/interrupt /tmp/spdk.wkgfH8
00:25:29.714     23:59:00	-- common/autotest_common.sh@359 -- # requested_size=2214592512
00:25:29.714     23:59:00	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:29.714      23:59:00	-- common/autotest_common.sh@328 -- # df -T
00:25:29.714      23:59:00	-- common/autotest_common.sh@328 -- # grep -v Filesystem
00:25:29.714     23:59:00	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:25:29.714     23:59:00	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:25:29.714     23:59:00	-- common/autotest_common.sh@363 -- # avails["$mount"]=1248956416
00:25:29.714     23:59:00	-- common/autotest_common.sh@363 -- # sizes["$mount"]=1253683200
00:25:29.714     23:59:00	-- common/autotest_common.sh@364 -- # uses["$mount"]=4726784
00:25:29.714     23:59:00	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:29.714     23:59:00	-- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1
00:25:29.714     23:59:00	-- common/autotest_common.sh@362 -- # fss["$mount"]=ext4
00:25:29.714     23:59:00	-- common/autotest_common.sh@363 -- # avails["$mount"]=10289377280
00:25:29.714     23:59:00	-- common/autotest_common.sh@363 -- # sizes["$mount"]=20616794112
00:25:29.714     23:59:00	-- common/autotest_common.sh@364 -- # uses["$mount"]=10310639616
00:25:29.714     23:59:00	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:29.714     23:59:00	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:25:29.714     23:59:00	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:25:29.714     23:59:00	-- common/autotest_common.sh@363 -- # avails["$mount"]=6265810944
00:25:29.714     23:59:00	-- common/autotest_common.sh@363 -- # sizes["$mount"]=6268403712
00:25:29.714     23:59:00	-- common/autotest_common.sh@364 -- # uses["$mount"]=2592768
00:25:29.714     23:59:00	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:29.714     23:59:00	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:25:29.714     23:59:00	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:25:29.714     23:59:00	-- common/autotest_common.sh@363 -- # avails["$mount"]=5242880
00:25:29.714     23:59:00	-- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880
00:25:29.714     23:59:00	-- common/autotest_common.sh@364 -- # uses["$mount"]=0
00:25:29.714     23:59:00	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:29.714     23:59:00	-- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15
00:25:29.714     23:59:00	-- common/autotest_common.sh@362 -- # fss["$mount"]=vfat
00:25:29.714     23:59:00	-- common/autotest_common.sh@363 -- # avails["$mount"]=103061504
00:25:29.714     23:59:00	-- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968
00:25:29.714     23:59:00	-- common/autotest_common.sh@364 -- # uses["$mount"]=6334464
00:25:29.714     23:59:00	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:29.714     23:59:00	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:25:29.714     23:59:00	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:25:29.714     23:59:00	-- common/autotest_common.sh@363 -- # avails["$mount"]=1253675008
00:25:29.714     23:59:00	-- common/autotest_common.sh@363 -- # sizes["$mount"]=1253679104
00:25:29.714     23:59:00	-- common/autotest_common.sh@364 -- # uses["$mount"]=4096
00:25:29.714     23:59:00	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:29.714     23:59:00	-- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output
00:25:29.714     23:59:00	-- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs
00:25:29.714     23:59:00	-- common/autotest_common.sh@363 -- # avails["$mount"]=98637844480
00:25:29.714     23:59:00	-- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992
00:25:29.714     23:59:00	-- common/autotest_common.sh@364 -- # uses["$mount"]=1064935424
00:25:29.714     23:59:00	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:29.714     23:59:00	-- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n'
00:25:29.714  * Looking for test storage...
00:25:29.714     23:59:00	-- common/autotest_common.sh@369 -- # local target_space new_size
00:25:29.714     23:59:00	-- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}"
00:25:29.714      23:59:00	-- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt
00:25:29.714      23:59:00	-- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}'
00:25:29.714     23:59:00	-- common/autotest_common.sh@373 -- # mount=/
00:25:29.714     23:59:00	-- common/autotest_common.sh@375 -- # target_space=10289377280
00:25:29.714     23:59:00	-- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size ))
00:25:29.714     23:59:00	-- common/autotest_common.sh@379 -- # (( target_space >= requested_size ))
00:25:29.714     23:59:00	-- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]]
00:25:29.714     23:59:00	-- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]]
00:25:29.714     23:59:00	-- common/autotest_common.sh@381 -- # [[ / == / ]]
00:25:29.714     23:59:00	-- common/autotest_common.sh@382 -- # new_size=12525232128
00:25:29.714     23:59:00	-- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 ))
00:25:29.714     23:59:00	-- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:25:29.714     23:59:00	-- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:25:29.714     23:59:00	-- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt
00:25:29.714  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:25:29.714     23:59:00	-- common/autotest_common.sh@390 -- # return 0
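[Editor's note] set_test_storage, traced above, parses `df -T` output into per-mount tables and settles on the first candidate directory whose filesystem can hold the requested size; note the requested 2147483648 bytes were padded to 2214592512 (an extra 64 MiB) before the comparison. A condensed sketch of the same logic; the byte-unit conversion is an assumption, since the trace only shows the raw df columns:

#!/usr/bin/env bash
# Usage: ./find_storage.sh DIR [DIR...] - pick the first directory whose
# filesystem has enough free space for the padded request.
requested_size=$(( 2147483648 + 64 * 1024 * 1024 ))   # 2 GiB plus the margin seen in the trace
declare -A avails
while read -r _ _ _ _ avail _ mount; do
    avails["$mount"]=$(( avail * 1024 ))   # df -T reports 1K blocks by default (assumption)
done < <(df -T | grep -v Filesystem)
for target_dir in "$@"; do
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    if (( ${avails[$mount]:-0} >= requested_size )); then
        printf '* Found test storage at %s\n' "$target_dir"
        exit 0
    fi
done
echo "no candidate large enough" >&2; exit 1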
00:25:29.714     23:59:00	-- common/autotest_common.sh@1677 -- # set -o errtrace
00:25:29.714     23:59:00	-- common/autotest_common.sh@1678 -- # shopt -s extdebug
00:25:29.714     23:59:00	-- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:25:29.714     23:59:00	-- common/autotest_common.sh@1681 -- # PS4=' \t	-- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:25:29.714     23:59:00	-- common/autotest_common.sh@1682 -- # true
00:25:29.714     23:59:00	-- common/autotest_common.sh@1684 -- # xtrace_fd
00:25:29.714     23:59:00	-- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:25:29.714     23:59:00	-- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:25:29.714     23:59:00	-- common/autotest_common.sh@27 -- # exec
00:25:29.714     23:59:00	-- common/autotest_common.sh@29 -- # exec
00:25:29.714     23:59:00	-- common/autotest_common.sh@31 -- # xtrace_restore
00:25:29.714     23:59:00	-- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:25:29.714     23:59:00	-- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:25:29.714     23:59:00	-- common/autotest_common.sh@18 -- # set -x
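[Editor's note] The errtrace/extdebug/PS4 block above is what produces this log's per-line timestamps and `file@line` prefixes, and xtrace_fd routes the `set -x` stream to a dedicated descriptor (13 here) instead of stderr. A sketch under the assumption that bash's BASH_XTRACEFD mechanism backs it; the fd number and log path are illustrative:

#!/usr/bin/env bash
# Route xtrace output to a private fd so it lands in a log file, with the
# same PS4 as the trace: \t expands to the time (PS4 gets prompt expansion),
# then the source path relative to the repo root and the line number.
exec 13>/tmp/xtrace.log
BASH_XTRACEFD=13
PS4=' \t	-- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
set -x
echo hello                        # traced into /tmp/xtrace.log, not stderr
set +x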
00:25:29.714     23:59:00	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:25:29.714      23:59:00	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:25:29.714      23:59:00	-- common/autotest_common.sh@1690 -- # lcov --version
00:25:29.714     23:59:00	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:25:29.714     23:59:00	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:25:29.714     23:59:00	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:25:29.714     23:59:00	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:25:29.714     23:59:00	-- scripts/common.sh@335 -- # IFS=.-:
00:25:29.714     23:59:00	-- scripts/common.sh@335 -- # read -ra ver1
00:25:29.714     23:59:00	-- scripts/common.sh@336 -- # IFS=.-:
00:25:29.714     23:59:00	-- scripts/common.sh@336 -- # read -ra ver2
00:25:29.714     23:59:00	-- scripts/common.sh@337 -- # local 'op=<'
00:25:29.714     23:59:00	-- scripts/common.sh@339 -- # ver1_l=2
00:25:29.714     23:59:00	-- scripts/common.sh@340 -- # ver2_l=1
00:25:29.714     23:59:00	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:25:29.714     23:59:00	-- scripts/common.sh@343 -- # case "$op" in
00:25:29.714     23:59:00	-- scripts/common.sh@344 -- # : 1
00:25:29.714     23:59:00	-- scripts/common.sh@363 -- # (( v = 0 ))
00:25:29.714     23:59:00	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:29.714      23:59:00	-- scripts/common.sh@364 -- # decimal 1
00:25:29.714      23:59:00	-- scripts/common.sh@352 -- # local d=1
00:25:29.714      23:59:00	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:29.714      23:59:00	-- scripts/common.sh@354 -- # echo 1
00:25:29.714     23:59:00	-- scripts/common.sh@364 -- # ver1[v]=1
00:25:29.714      23:59:00	-- scripts/common.sh@365 -- # decimal 2
00:25:29.715      23:59:00	-- scripts/common.sh@352 -- # local d=2
00:25:29.715      23:59:00	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:29.715      23:59:00	-- scripts/common.sh@354 -- # echo 2
00:25:29.715     23:59:00	-- scripts/common.sh@365 -- # ver2[v]=2
00:25:29.715     23:59:00	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:25:29.715     23:59:00	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:25:29.715     23:59:00	-- scripts/common.sh@367 -- # return 0
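[Editor's note] The lt/cmp_versions trace above compares `lcov --version` (1.15) against 2 component by component to decide whether the newer coverage flags apply. A simplified, self-contained version of that comparison; the traced script additionally normalizes non-numeric components through its `decimal` helper, omitted here:

#!/usr/bin/env bash
# Split two versions on '.', '-' and ':' and compare them field by field.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v max
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}
cmp_versions 1.15 '<' 2 && echo "1.15 < 2: lcov predates the 2.x flag names"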
00:25:29.715     23:59:00	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:29.715     23:59:00	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:25:29.715  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:29.715  		--rc genhtml_branch_coverage=1
00:25:29.715  		--rc genhtml_function_coverage=1
00:25:29.715  		--rc genhtml_legend=1
00:25:29.715  		--rc geninfo_all_blocks=1
00:25:29.715  		--rc geninfo_unexecuted_blocks=1
00:25:29.715  		
00:25:29.715  		'
00:25:29.715     23:59:00	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:25:29.715  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:29.715  		--rc genhtml_branch_coverage=1
00:25:29.715  		--rc genhtml_function_coverage=1
00:25:29.715  		--rc genhtml_legend=1
00:25:29.715  		--rc geninfo_all_blocks=1
00:25:29.715  		--rc geninfo_unexecuted_blocks=1
00:25:29.715  		
00:25:29.715  		'
00:25:29.715     23:59:00	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:25:29.715  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:29.715  		--rc genhtml_branch_coverage=1
00:25:29.715  		--rc genhtml_function_coverage=1
00:25:29.715  		--rc genhtml_legend=1
00:25:29.715  		--rc geninfo_all_blocks=1
00:25:29.715  		--rc geninfo_unexecuted_blocks=1
00:25:29.715  		
00:25:29.715  		'
00:25:29.715     23:59:00	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:25:29.715  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:29.715  		--rc genhtml_branch_coverage=1
00:25:29.715  		--rc genhtml_function_coverage=1
00:25:29.715  		--rc genhtml_legend=1
00:25:29.715  		--rc geninfo_all_blocks=1
00:25:29.715  		--rc geninfo_unexecuted_blocks=1
00:25:29.715  		
00:25:29.715  		'
00:25:29.715    23:59:00	-- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:25:29.715    23:59:00	-- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1
00:25:29.715    23:59:00	-- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2
00:25:29.715    23:59:00	-- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4
00:25:29.715    23:59:00	-- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07
00:25:29.715    23:59:00	-- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock
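[Editor's note] interrupt_common.sh pins one reactor per core: r0/r1/r2 get the single-bit masks 0x1/0x2/0x4 and the target itself runs on their union, 0x07 (cores 0-2), talking RPC over /var/tmp/spdk.sock. A tiny helper to read such masks, purely illustrative:

#!/usr/bin/env bash
# Expand a hex CPU mask into the core numbers it selects.
mask_to_cores() {
    local mask=$(( $1 )) core
    local -a cores=()
    for (( core = 0; mask > 0; core++, mask >>= 1 )); do
        (( mask & 1 )) && cores+=("$core")
    done
    echo "${cores[*]}"
}
mask_to_cores 0x07   # -> 0 1 2, the three reactors started below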
00:25:29.715   23:59:00	-- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:25:29.715   23:59:00	-- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:25:29.715   23:59:00	-- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt
00:25:29.715   23:59:00	-- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:29.715   23:59:00	-- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07
00:25:29.715   23:59:00	-- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=132569
00:25:29.715   23:59:00	-- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:25:29.715   23:59:00	-- interrupt/interrupt_common.sh@29 -- # waitforlisten 132569 /var/tmp/spdk.sock
00:25:29.715   23:59:00	-- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:25:29.715   23:59:00	-- common/autotest_common.sh@829 -- # '[' -z 132569 ']'
00:25:29.715   23:59:00	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:29.715   23:59:00	-- common/autotest_common.sh@834 -- # local max_retries=100
00:25:29.715  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:29.715   23:59:00	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:29.715   23:59:00	-- common/autotest_common.sh@838 -- # xtrace_disable
00:25:29.715   23:59:00	-- common/autotest_common.sh@10 -- # set +x
00:25:29.974  [2024-12-13 23:59:00.446295] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:29.974  [2024-12-13 23:59:00.446489] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132569 ]
00:25:29.974  [2024-12-13 23:59:00.625152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:30.232  [2024-12-13 23:59:00.811309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:30.232  [2024-12-13 23:59:00.811455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:30.232  [2024-12-13 23:59:00.811455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:30.491  [2024-12-13 23:59:01.090736] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:25:30.749   23:59:01	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:30.749   23:59:01	-- common/autotest_common.sh@862 -- # return 0
00:25:30.749   23:59:01	-- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem
00:25:30.749   23:59:01	-- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:25:31.317  Malloc0
00:25:31.317  Malloc1
00:25:31.317  Malloc2
00:25:31.317   23:59:01	-- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio
00:25:31.317    23:59:01	-- interrupt/interrupt_common.sh@98 -- # uname -s
00:25:31.317   23:59:01	-- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:25:31.317   23:59:01	-- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
00:25:31.317  5000+0 records in
00:25:31.317  5000+0 records out
00:25:31.317  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0195613 s, 523 MB/s
00:25:31.317   23:59:01	-- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048
00:25:31.576  AIO0
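[Editor's note] setup_bdev_aio, traced above, backs an AIO bdev with a plain 10,240,000-byte zero-filled file and registers it over RPC (the `Linux != FreeBSD` guard skips this on FreeBSD). The same steps as a standalone sketch, paths as in the trace, assuming the target app is already listening:

#!/usr/bin/env bash
# Create the backing file and expose it as bdev AIO0 with a 2048-byte block size.
spdk_dir=/home/vagrant/spdk_repo/spdk
aiofile=$spdk_dir/test/interrupt/aiofile
if [[ $(uname -s) != FreeBSD ]]; then
    dd if=/dev/zero of="$aiofile" bs=2048 count=5000
    "$spdk_dir/scripts/rpc.py" bdev_aio_create "$aiofile" AIO0 2048
fi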
00:25:31.576   23:59:02	-- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 132569
00:25:31.576   23:59:02	-- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 132569 without_thd
00:25:31.576   23:59:02	-- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=132569
00:25:31.576   23:59:02	-- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd
00:25:31.576   23:59:02	-- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask))
00:25:31.576    23:59:02	-- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1
00:25:31.576    23:59:02	-- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1
00:25:31.576    23:59:02	-- interrupt/interrupt_common.sh@79 -- # local grep_str
00:25:31.576    23:59:02	-- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1
00:25:31.576    23:59:02	-- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:25:31.576     23:59:02	-- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:25:31.576     23:59:02	-- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:25:31.576    23:59:02	-- interrupt/interrupt_common.sh@85 -- # echo 1
00:25:31.576   23:59:02	-- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask))
00:25:31.576    23:59:02	-- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4
00:25:31.576    23:59:02	-- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4
00:25:31.576    23:59:02	-- interrupt/interrupt_common.sh@79 -- # local grep_str
00:25:31.576    23:59:02	-- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4
00:25:31.576    23:59:02	-- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:25:31.576     23:59:02	-- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:25:31.576     23:59:02	-- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:25:31.835    23:59:02	-- interrupt/interrupt_common.sh@85 -- # echo ''
00:25:31.835   23:59:02	-- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]]
00:25:31.835  spdk_thread ids are 1 on reactor0.
00:25:31.835   23:59:02	-- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.'
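[Editor's note] Both thread-id lookups above go through the thread_get_stats RPC filtered with jq; the cpumask argument is the hex mask without its 0x prefix, which is why reactor 0 matches "1" while reactor 2's query ("4") comes back empty: no spdk_thread lives on core 2 yet. As a standalone sketch:

#!/usr/bin/env bash
# List the ids of spdk_threads pinned to a given reactor cpumask.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # path as in the trace
reactor_get_thread_ids() {
    local cpumask
    cpumask=$(printf '%x' "$(( $1 ))")               # 0x1 -> "1", matching the RPC's JSON
    "$rpc" thread_get_stats |
        jq --arg reactor_cpumask "$cpumask" \
           '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
}
reactor_get_thread_ids 0x1   # prints 1 here; 0x4 would print nothing yet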
00:25:31.835   23:59:02	-- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:25:31.835   23:59:02	-- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 132569 0
00:25:31.835   23:59:02	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132569 0 idle
00:25:31.835   23:59:02	-- interrupt/interrupt_common.sh@33 -- # local pid=132569
00:25:31.835   23:59:02	-- interrupt/interrupt_common.sh@34 -- # local idx=0
00:25:31.835   23:59:02	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:31.835   23:59:02	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:31.835   23:59:02	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:31.835   23:59:02	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:31.835   23:59:02	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:31.835   23:59:02	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:31.835    23:59:02	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132569 -w 256
00:25:31.835    23:59:02	-- interrupt/interrupt_common.sh@47 -- # grep reactor_0
00:25:32.094   23:59:02	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132569 root      20   0   20.1t 146220  29048 S   0.0   1.2   0:00.72 reactor_0'
00:25:32.094    23:59:02	-- interrupt/interrupt_common.sh@48 -- # echo 132569 root 20 0 20.1t 146220 29048 S 0.0 1.2 0:00.72 reactor_0
00:25:32.094    23:59:02	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:32.094    23:59:02	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:32.094   23:59:02	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:32.094   23:59:02	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:32.094   23:59:02	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:32.094   23:59:02	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:32.094   23:59:02	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:32.094   23:59:02	-- interrupt/interrupt_common.sh@56 -- # return 0
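[Editor's note] Each idle/busy probe above is a single batch snapshot of the target's threads: `top -bHn 1 -p <pid> -w 256`, grep for the reactor_<idx> row, take %CPU from field 9, truncate the decimals, then compare against the thresholds visible in the trace (busy means at least 70%, idle at most 30%). Condensed into one function:

#!/usr/bin/env bash
# Classify a reactor thread by its instantaneous %CPU, as in the trace.
reactor_state() {
    local pid=$1 idx=$2 cpu_rate
    cpu_rate=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" | awk '{print $9}')
    cpu_rate=${cpu_rate:-0}                      # no matching thread reads as 0
    cpu_rate=${cpu_rate%.*}                      # 99.9 -> 99, 0.0 -> 0
    if   (( cpu_rate >= 70 )); then echo busy
    elif (( cpu_rate <= 30 )); then echo idle
    else echo neither; fi
}
reactor_state 132569 0   # the trace's PID; prints "idle" while reactor 0 sits in interrupt mode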
00:25:32.094   23:59:02	-- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:25:32.094   23:59:02	-- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 132569 1
00:25:32.094   23:59:02	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132569 1 idle
00:25:32.094   23:59:02	-- interrupt/interrupt_common.sh@33 -- # local pid=132569
00:25:32.094   23:59:02	-- interrupt/interrupt_common.sh@34 -- # local idx=1
00:25:32.094   23:59:02	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:32.094   23:59:02	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:32.094   23:59:02	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:32.094   23:59:02	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:32.094   23:59:02	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:32.094   23:59:02	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:32.094    23:59:02	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132569 -w 256
00:25:32.094    23:59:02	-- interrupt/interrupt_common.sh@47 -- # grep reactor_1
00:25:32.353   23:59:02	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132572 root      20   0   20.1t 146220  29048 S   0.0   1.2   0:00.00 reactor_1'
00:25:32.353    23:59:02	-- interrupt/interrupt_common.sh@48 -- # echo 132572 root 20 0 20.1t 146220 29048 S 0.0 1.2 0:00.00 reactor_1
00:25:32.353    23:59:02	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:32.353    23:59:02	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:32.353   23:59:02	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:32.353   23:59:02	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:32.353   23:59:02	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:32.353   23:59:02	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:32.353   23:59:02	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:32.353   23:59:02	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:32.353   23:59:02	-- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:25:32.353   23:59:02	-- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 132569 2
00:25:32.353   23:59:02	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132569 2 idle
00:25:32.353   23:59:02	-- interrupt/interrupt_common.sh@33 -- # local pid=132569
00:25:32.353   23:59:02	-- interrupt/interrupt_common.sh@34 -- # local idx=2
00:25:32.353   23:59:02	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:32.353   23:59:02	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:32.353   23:59:02	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:32.353   23:59:02	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:32.353   23:59:02	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:32.353   23:59:02	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:32.353    23:59:02	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132569 -w 256
00:25:32.353    23:59:02	-- interrupt/interrupt_common.sh@47 -- # grep reactor_2
00:25:32.353   23:59:03	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132573 root      20   0   20.1t 146220  29048 S   0.0   1.2   0:00.00 reactor_2'
00:25:32.353    23:59:03	-- interrupt/interrupt_common.sh@48 -- # echo 132573 root 20 0 20.1t 146220 29048 S 0.0 1.2 0:00.00 reactor_2
00:25:32.353    23:59:03	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:32.353    23:59:03	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:32.353   23:59:03	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:32.353   23:59:03	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:32.353   23:59:03	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:32.353   23:59:03	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:32.353   23:59:03	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:32.353   23:59:03	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:32.353   23:59:03	-- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']'
00:25:32.353   23:59:03	-- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}"
00:25:32.353   23:59:03	-- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2
00:25:32.612  [2024-12-13 23:59:03.223393] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:25:32.612   23:59:03	-- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
00:25:32.871  [2024-12-13 23:59:03.411122] interrupt_tgt.c:  61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0.
00:25:32.871  [2024-12-13 23:59:03.411869] interrupt_tgt.c:  32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:25:32.871   23:59:03	-- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
00:25:32.871  [2024-12-13 23:59:03.598973] interrupt_tgt.c:  61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2.
00:25:32.871  [2024-12-13 23:59:03.599349] interrupt_tgt.c:  32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
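[Editor's note] The mode flips in this test are plain RPC calls through the interrupt_plugin: `-d` disables interrupt mode (the reactor returns to polling) and the same call without `-d` re-enables it, each acknowledged by a "complete reactor switch" notice. The sequence this test walks through, flattened (the re-enables appear further down in the trace):

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d   # reactor 0 -> poll mode
"$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d   # reactor 2 -> poll mode
"$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 2      # reactor 2 -> interrupt mode
"$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 0      # reactor 0 -> interrupt mode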
00:25:33.130   23:59:03	-- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:25:33.130   23:59:03	-- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 132569 0
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 132569 0 busy
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@33 -- # local pid=132569
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@34 -- # local idx=0
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@35 -- # local state=busy
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]]
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:33.130    23:59:03	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132569 -w 256
00:25:33.130    23:59:03	-- interrupt/interrupt_common.sh@47 -- # grep reactor_0
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132569 root      20   0   20.1t 146328  29048 R  99.9   1.2   0:01.09 reactor_0'
00:25:33.130    23:59:03	-- interrupt/interrupt_common.sh@48 -- # echo 132569 root 20 0 20.1t 146328 29048 R 99.9 1.2 0:01.09 reactor_0
00:25:33.130    23:59:03	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:33.130    23:59:03	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=99
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]]
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]]
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]]
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:33.130   23:59:03	-- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:25:33.130   23:59:03	-- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 132569 2
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 132569 2 busy
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@33 -- # local pid=132569
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@34 -- # local idx=2
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@35 -- # local state=busy
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]]
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:33.130   23:59:03	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:33.130    23:59:03	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132569 -w 256
00:25:33.130    23:59:03	-- interrupt/interrupt_common.sh@47 -- # grep reactor_2
00:25:33.389   23:59:03	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132573 root      20   0   20.1t 146328  29048 R  99.9   1.2   0:00.34 reactor_2'
00:25:33.389    23:59:03	-- interrupt/interrupt_common.sh@48 -- # echo 132573 root 20 0 20.1t 146328 29048 R 99.9 1.2 0:00.34 reactor_2
00:25:33.389    23:59:03	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:33.389    23:59:03	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:33.389   23:59:03	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9
00:25:33.389   23:59:03	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=99
00:25:33.389   23:59:03	-- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]]
00:25:33.389   23:59:03	-- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]]
00:25:33.389   23:59:03	-- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]]
00:25:33.389   23:59:03	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:33.389   23:59:03	-- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2
00:25:33.648  [2024-12-13 23:59:04.179034] interrupt_tgt.c:  61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2.
00:25:33.648  [2024-12-13 23:59:04.179383] interrupt_tgt.c:  32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:25:33.648   23:59:04	-- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']'
00:25:33.648   23:59:04	-- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 132569 2
00:25:33.648   23:59:04	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132569 2 idle
00:25:33.648   23:59:04	-- interrupt/interrupt_common.sh@33 -- # local pid=132569
00:25:33.648   23:59:04	-- interrupt/interrupt_common.sh@34 -- # local idx=2
00:25:33.648   23:59:04	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:33.648   23:59:04	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:33.648   23:59:04	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:33.648   23:59:04	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:33.648   23:59:04	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:33.648   23:59:04	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:33.648    23:59:04	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132569 -w 256
00:25:33.648    23:59:04	-- interrupt/interrupt_common.sh@47 -- # grep reactor_2
00:25:33.648   23:59:04	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132573 root      20   0   20.1t 146392  29048 S   0.0   1.2   0:00.58 reactor_2'
00:25:33.648    23:59:04	-- interrupt/interrupt_common.sh@48 -- # echo 132573 root 20 0 20.1t 146392 29048 S 0.0 1.2 0:00.58 reactor_2
00:25:33.648    23:59:04	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:33.648    23:59:04	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:33.648   23:59:04	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:33.648   23:59:04	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:33.648   23:59:04	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:33.648   23:59:04	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:33.648   23:59:04	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:33.648   23:59:04	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:33.648   23:59:04	-- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0
00:25:33.907  [2024-12-13 23:59:04.595026] interrupt_tgt.c:  61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0.
00:25:33.907  [2024-12-13 23:59:04.595425] interrupt_tgt.c:  32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:25:33.907   23:59:04	-- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']'
00:25:33.907   23:59:04	-- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}"
00:25:33.907   23:59:04	-- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1
00:25:34.166  [2024-12-13 23:59:04.779395] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:25:34.166   23:59:04	-- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 132569 0
00:25:34.166   23:59:04	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132569 0 idle
00:25:34.166   23:59:04	-- interrupt/interrupt_common.sh@33 -- # local pid=132569
00:25:34.166   23:59:04	-- interrupt/interrupt_common.sh@34 -- # local idx=0
00:25:34.166   23:59:04	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:34.166   23:59:04	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:34.166   23:59:04	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:34.166   23:59:04	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:34.166   23:59:04	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:34.166   23:59:04	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:34.166    23:59:04	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132569 -w 256
00:25:34.166    23:59:04	-- interrupt/interrupt_common.sh@47 -- # grep reactor_0
00:25:34.424   23:59:04	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132569 root      20   0   20.1t 146484  29048 S   0.0   1.2   0:01.91 reactor_0'
00:25:34.424    23:59:04	-- interrupt/interrupt_common.sh@48 -- # echo 132569 root 20 0 20.1t 146484 29048 S 0.0 1.2 0:01.91 reactor_0
00:25:34.424    23:59:04	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:34.424    23:59:04	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:34.424   23:59:04	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:34.424   23:59:04	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:34.424   23:59:04	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:34.424   23:59:04	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:34.424   23:59:04	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:34.424   23:59:04	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:34.424   23:59:04	-- interrupt/reactor_set_interrupt.sh@72 -- # return 0
00:25:34.424   23:59:04	-- interrupt/reactor_set_interrupt.sh@77 -- # return 0
00:25:34.424   23:59:04	-- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT
00:25:34.424   23:59:04	-- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 132569
00:25:34.424   23:59:04	-- common/autotest_common.sh@936 -- # '[' -z 132569 ']'
00:25:34.424   23:59:04	-- common/autotest_common.sh@940 -- # kill -0 132569
00:25:34.424    23:59:04	-- common/autotest_common.sh@941 -- # uname
00:25:34.424   23:59:04	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:34.424    23:59:04	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132569
00:25:34.424   23:59:04	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:25:34.424   23:59:04	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:25:34.425  killing process with pid 132569
00:25:34.425   23:59:04	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 132569'
00:25:34.425   23:59:04	-- common/autotest_common.sh@955 -- # kill 132569
00:25:34.425   23:59:04	-- common/autotest_common.sh@960 -- # wait 132569
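[Editor's note] killprocess, traced above, refuses to signal anything whose command name is sudo, announces the kill, sends the default SIGTERM, then reaps the process. A minimal equivalent of those steps:

#!/usr/bin/env bash
# Kill a test process by pid, with the same sudo guard seen in the trace.
killprocess() {
    local pid=$1
    [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1   # never kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null      # reaps only if $pid is a child of this shell
}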
00:25:35.802   23:59:06	-- interrupt/reactor_set_interrupt.sh@94 -- # cleanup
00:25:35.802   23:59:06	-- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:25:35.802   23:59:06	-- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt
00:25:35.802   23:59:06	-- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:35.802   23:59:06	-- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07
00:25:35.802   23:59:06	-- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:25:35.802   23:59:06	-- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=132713
00:25:35.802   23:59:06	-- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:25:35.802   23:59:06	-- interrupt/interrupt_common.sh@29 -- # waitforlisten 132713 /var/tmp/spdk.sock
00:25:35.802   23:59:06	-- common/autotest_common.sh@829 -- # '[' -z 132713 ']'
00:25:35.802   23:59:06	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:35.802   23:59:06	-- common/autotest_common.sh@834 -- # local max_retries=100
00:25:35.802  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:35.802   23:59:06	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:35.802   23:59:06	-- common/autotest_common.sh@838 -- # xtrace_disable
00:25:35.802   23:59:06	-- common/autotest_common.sh@10 -- # set +x
00:25:35.802  [2024-12-13 23:59:06.259516] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:35.802  [2024-12-13 23:59:06.259696] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132713 ]
00:25:35.802  [2024-12-13 23:59:06.438328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:36.061  [2024-12-13 23:59:06.629311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:36.061  [2024-12-13 23:59:06.629455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:36.061  [2024-12-13 23:59:06.629717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:36.319  [2024-12-13 23:59:06.907569] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:25:36.578   23:59:07	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:36.578   23:59:07	-- common/autotest_common.sh@862 -- # return 0
00:25:36.578   23:59:07	-- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem
00:25:36.578   23:59:07	-- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:25:36.836  Malloc0
00:25:36.836  Malloc1
00:25:36.836  Malloc2
00:25:36.836   23:59:07	-- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio
00:25:36.836    23:59:07	-- interrupt/interrupt_common.sh@98 -- # uname -s
00:25:36.836   23:59:07	-- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:25:36.837   23:59:07	-- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
00:25:36.837  5000+0 records in
00:25:36.837  5000+0 records out
00:25:36.837  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0279147 s, 367 MB/s
00:25:36.837   23:59:07	-- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048
00:25:37.095  AIO0
00:25:37.095   23:59:07	-- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 132713
00:25:37.095   23:59:07	-- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 132713
00:25:37.095   23:59:07	-- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=132713
00:25:37.095   23:59:07	-- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=
00:25:37.095   23:59:07	-- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask))
00:25:37.095    23:59:07	-- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1
00:25:37.095    23:59:07	-- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1
00:25:37.095    23:59:07	-- interrupt/interrupt_common.sh@79 -- # local grep_str
00:25:37.095    23:59:07	-- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1
00:25:37.095    23:59:07	-- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:25:37.095     23:59:07	-- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:25:37.095     23:59:07	-- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:25:37.354    23:59:08	-- interrupt/interrupt_common.sh@85 -- # echo 1
00:25:37.354   23:59:08	-- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask))
00:25:37.354    23:59:08	-- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4
00:25:37.354    23:59:08	-- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4
00:25:37.354    23:59:08	-- interrupt/interrupt_common.sh@79 -- # local grep_str
00:25:37.354    23:59:08	-- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4
00:25:37.354    23:59:08	-- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:25:37.354     23:59:08	-- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:25:37.354     23:59:08	-- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:25:37.613    23:59:08	-- interrupt/interrupt_common.sh@85 -- # echo ''
00:25:37.613   23:59:08	-- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]]
00:25:37.613   23:59:08	-- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.'
00:25:37.613  spdk_thread ids are 1 on reactor0.
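reactor_get_thread_ids resolves which spdk_thread ids live on a reactor by filtering thread_get_stats output on the thread cpumask; the 0x prefix is stripped first, which is why the jq argument is 1 rather than 0x1 (and 4 for reactor 2, which has no threads yet, hence the empty echo above it). The whole pipeline in one line:

    # ids of spdk_threads pinned to reactor 0 (cpumask 0x1 -> "1")
    ./scripts/rpc.py thread_get_stats \
      | jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'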
00:25:37.613   23:59:08	-- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:25:37.613   23:59:08	-- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 132713 0
00:25:37.613   23:59:08	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132713 0 idle
00:25:37.613   23:59:08	-- interrupt/interrupt_common.sh@33 -- # local pid=132713
00:25:37.613   23:59:08	-- interrupt/interrupt_common.sh@34 -- # local idx=0
00:25:37.613   23:59:08	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:37.613   23:59:08	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:37.613   23:59:08	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:37.613   23:59:08	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:37.613   23:59:08	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:37.613   23:59:08	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:37.613    23:59:08	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132713 -w 256
00:25:37.613    23:59:08	-- interrupt/interrupt_common.sh@47 -- # grep reactor_0
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132713 root      20   0   20.1t 146148  28996 S   0.0   1.2   0:00.72 reactor_0'
00:25:37.872    23:59:08	-- interrupt/interrupt_common.sh@48 -- # echo 132713 root 20 0 20.1t 146148 28996 S 0.0 1.2 0:00.72 reactor_0
00:25:37.872    23:59:08	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:37.872    23:59:08	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@56 -- # return 0
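reactor_is_busy_or_idle, which repeats for each reactor below, samples a single batch iteration of top in thread mode, picks out the reactor's row, and reads column 9 (%CPU) as an integer. The thresholds implied by the comparisons in the trace: "busy" fails below 70% and "idle" fails above 30%, with the j countdown allowing up to 10 retries. A stripped-down sketch under those assumptions:

    pid=132713 idx=0 state=idle
    row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
    cpu_rate=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%%.*}                        # 99.9 -> 99, 0.0 -> 0
    [[ $state == busy && ${cpu_rate:-0} -lt 70 ]] && echo "reactor_$idx not busy"
    [[ $state == idle && ${cpu_rate:-0} -gt 30 ]] && echo "reactor_$idx not idle"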
00:25:37.872   23:59:08	-- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:25:37.872   23:59:08	-- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 132713 1
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132713 1 idle
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@33 -- # local pid=132713
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@34 -- # local idx=1
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:37.872    23:59:08	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132713 -w 256
00:25:37.872    23:59:08	-- interrupt/interrupt_common.sh@47 -- # grep reactor_1
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132718 root      20   0   20.1t 146148  28996 S   0.0   1.2   0:00.00 reactor_1'
00:25:37.872    23:59:08	-- interrupt/interrupt_common.sh@48 -- # echo 132718 root 20 0 20.1t 146148 28996 S 0.0 1.2 0:00.00 reactor_1
00:25:37.872    23:59:08	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:37.872    23:59:08	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:37.872   23:59:08	-- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:25:37.872   23:59:08	-- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 132713 2
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132713 2 idle
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@33 -- # local pid=132713
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@34 -- # local idx=2
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:37.872   23:59:08	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:37.872    23:59:08	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132713 -w 256
00:25:37.872    23:59:08	-- interrupt/interrupt_common.sh@47 -- # grep reactor_2
00:25:38.131   23:59:08	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132723 root      20   0   20.1t 146148  28996 S   0.0   1.2   0:00.00 reactor_2'
00:25:38.131    23:59:08	-- interrupt/interrupt_common.sh@48 -- # echo 132723 root 20 0 20.1t 146148 28996 S 0.0 1.2 0:00.00 reactor_2
00:25:38.131    23:59:08	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:38.131    23:59:08	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:38.131   23:59:08	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:38.131   23:59:08	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:38.131   23:59:08	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:38.131   23:59:08	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:38.131   23:59:08	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:38.131   23:59:08	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:38.131   23:59:08	-- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']'
00:25:38.131   23:59:08	-- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
00:25:38.390  [2024-12-13 23:59:08.991849] interrupt_tgt.c:  61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0.
00:25:38.390  [2024-12-13 23:59:08.992124] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode.
00:25:38.390  [2024-12-13 23:59:08.992486] interrupt_tgt.c:  32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:25:38.390   23:59:09	-- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
00:25:38.649  [2024-12-13 23:59:09.251662] interrupt_tgt.c:  61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2.
00:25:38.649  [2024-12-13 23:59:09.252250] interrupt_tgt.c:  32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
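Both mode flips go through the same test-plugin RPC: reactor_set_interrupt_mode with -d switches a reactor to poll mode, and without -d switches it back to interrupt mode (reactor 1 is left untouched as the control). The plugin is found because PYTHONPATH includes the test's rpc_plugins directory, as the exports later in this log show. The two directions side by side:

    # disable interrupt mode on reactor 0 (it spins polling; %CPU climbs to ~100)
    ./scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
    # re-enable interrupt mode (it sleeps between events; %CPU drops back to ~0)
    ./scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0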
00:25:38.649   23:59:09	-- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:25:38.649   23:59:09	-- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 132713 0
00:25:38.649   23:59:09	-- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 132713 0 busy
00:25:38.649   23:59:09	-- interrupt/interrupt_common.sh@33 -- # local pid=132713
00:25:38.649   23:59:09	-- interrupt/interrupt_common.sh@34 -- # local idx=0
00:25:38.649   23:59:09	-- interrupt/interrupt_common.sh@35 -- # local state=busy
00:25:38.649   23:59:09	-- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]]
00:25:38.649   23:59:09	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:38.649   23:59:09	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:38.649   23:59:09	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:38.649    23:59:09	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132713 -w 256
00:25:38.649    23:59:09	-- interrupt/interrupt_common.sh@47 -- # grep reactor_0
00:25:38.908   23:59:09	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132713 root      20   0   20.1t 146236  28996 R  99.9   1.2   0:01.16 reactor_0'
00:25:38.909    23:59:09	-- interrupt/interrupt_common.sh@48 -- # echo 132713 root 20 0 20.1t 146236 28996 R 99.9 1.2 0:01.16 reactor_0
00:25:38.909    23:59:09	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:38.909    23:59:09	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=99
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]]
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]]
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]]
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:38.909   23:59:09	-- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:25:38.909   23:59:09	-- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 132713 2
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 132713 2 busy
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@33 -- # local pid=132713
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@34 -- # local idx=2
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@35 -- # local state=busy
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]]
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:38.909    23:59:09	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132713 -w 256
00:25:38.909    23:59:09	-- interrupt/interrupt_common.sh@47 -- # grep reactor_2
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132723 root      20   0   20.1t 146236  28996 R  99.9   1.2   0:00.34 reactor_2'
00:25:38.909    23:59:09	-- interrupt/interrupt_common.sh@48 -- # echo 132723 root 20 0 20.1t 146236 28996 R 99.9 1.2 0:00.34 reactor_2
00:25:38.909    23:59:09	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:38.909    23:59:09	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=99
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]]
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]]
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]]
00:25:38.909   23:59:09	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:38.909   23:59:09	-- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2
00:25:39.167  [2024-12-13 23:59:09.860029] interrupt_tgt.c:  61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2.
00:25:39.167  [2024-12-13 23:59:09.860221] interrupt_tgt.c:  32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:25:39.167   23:59:09	-- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']'
00:25:39.167   23:59:09	-- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 132713 2
00:25:39.167   23:59:09	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132713 2 idle
00:25:39.167   23:59:09	-- interrupt/interrupt_common.sh@33 -- # local pid=132713
00:25:39.167   23:59:09	-- interrupt/interrupt_common.sh@34 -- # local idx=2
00:25:39.167   23:59:09	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:39.167   23:59:09	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:39.167   23:59:09	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:39.167   23:59:09	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:39.167   23:59:09	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:39.167   23:59:09	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:39.167    23:59:09	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132713 -w 256
00:25:39.167    23:59:09	-- interrupt/interrupt_common.sh@47 -- # grep reactor_2
00:25:39.426   23:59:10	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132723 root      20   0   20.1t 146300  28996 S   0.0   1.2   0:00.60 reactor_2'
00:25:39.426    23:59:10	-- interrupt/interrupt_common.sh@48 -- # echo 132723 root 20 0 20.1t 146300 28996 S 0.0 1.2 0:00.60 reactor_2
00:25:39.426    23:59:10	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:39.426    23:59:10	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:39.426   23:59:10	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:39.426   23:59:10	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:39.426   23:59:10	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:39.426   23:59:10	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:39.426   23:59:10	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:39.426   23:59:10	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:39.426   23:59:10	-- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0
00:25:39.685  [2024-12-13 23:59:10.280090] interrupt_tgt.c:  61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0.
00:25:39.685  [2024-12-13 23:59:10.280449] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode.
00:25:39.685  [2024-12-13 23:59:10.280489] interrupt_tgt.c:  32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:25:39.685   23:59:10	-- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']'
00:25:39.685   23:59:10	-- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 132713 0
00:25:39.685   23:59:10	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 132713 0 idle
00:25:39.685   23:59:10	-- interrupt/interrupt_common.sh@33 -- # local pid=132713
00:25:39.685   23:59:10	-- interrupt/interrupt_common.sh@34 -- # local idx=0
00:25:39.685   23:59:10	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:39.685   23:59:10	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:39.685   23:59:10	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:39.685   23:59:10	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:39.685   23:59:10	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:39.685   23:59:10	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:39.685    23:59:10	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 132713 -w 256
00:25:39.685    23:59:10	-- interrupt/interrupt_common.sh@47 -- # grep reactor_0
00:25:39.944   23:59:10	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 132713 root      20   0   20.1t 146344  28996 S   6.7   1.2   0:02.02 reactor_0'
00:25:39.944    23:59:10	-- interrupt/interrupt_common.sh@48 -- # echo 132713 root 20 0 20.1t 146344 28996 S 6.7 1.2 0:02.02 reactor_0
00:25:39.944    23:59:10	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:39.944    23:59:10	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:39.944   23:59:10	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7
00:25:39.944   23:59:10	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=6
00:25:39.944   23:59:10	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:39.944   23:59:10	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:39.944   23:59:10	-- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]]
00:25:39.944   23:59:10	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:39.944   23:59:10	-- interrupt/reactor_set_interrupt.sh@72 -- # return 0
00:25:39.944   23:59:10	-- interrupt/reactor_set_interrupt.sh@82 -- # return 0
00:25:39.944   23:59:10	-- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT
00:25:39.944   23:59:10	-- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 132713
00:25:39.944   23:59:10	-- common/autotest_common.sh@936 -- # '[' -z 132713 ']'
00:25:39.944   23:59:10	-- common/autotest_common.sh@940 -- # kill -0 132713
00:25:39.944    23:59:10	-- common/autotest_common.sh@941 -- # uname
00:25:39.944   23:59:10	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:39.944    23:59:10	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132713
00:25:39.944   23:59:10	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:25:39.944   23:59:10	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:25:39.944   23:59:10	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 132713'
00:25:39.944  killing process with pid 132713
00:25:39.944   23:59:10	-- common/autotest_common.sh@955 -- # kill 132713
00:25:39.944   23:59:10	-- common/autotest_common.sh@960 -- # wait 132713
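killprocess wraps the teardown with guards: a non-empty pid, a kill -0 liveness probe, and a ps name check so the helper never signals a sudo wrapper by mistake (the uname branch picks ps flags per OS; only the Linux path runs here). Only then does it kill and reap. Roughly:

    killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0            # already gone
      [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"     # reaps the exit status; valid because the app is our child
    }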
00:25:41.320   23:59:11	-- interrupt/reactor_set_interrupt.sh@105 -- # cleanup
00:25:41.320   23:59:11	-- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:25:41.320  
00:25:41.320  real	0m11.734s
00:25:41.320  user	0m11.899s
00:25:41.320  sys	0m1.844s
00:25:41.320   23:59:11	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:41.320   23:59:11	-- common/autotest_common.sh@10 -- # set +x
00:25:41.320  ************************************
00:25:41.320  END TEST reactor_set_interrupt
00:25:41.320  ************************************
00:25:41.320   23:59:11	-- spdk/autotest.sh@187 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:25:41.320   23:59:11	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:41.320   23:59:11	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:41.320   23:59:11	-- common/autotest_common.sh@10 -- # set +x
00:25:41.320  ************************************
00:25:41.320  START TEST reap_unregistered_poller
00:25:41.320  ************************************
00:25:41.320   23:59:11	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:25:41.320  * Looking for test storage...
00:25:41.320  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:25:41.320    23:59:11	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:25:41.320     23:59:11	-- common/autotest_common.sh@1690 -- # lcov --version
00:25:41.320     23:59:11	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:25:41.320    23:59:11	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:25:41.320    23:59:11	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:25:41.320    23:59:11	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:25:41.320    23:59:11	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:25:41.320    23:59:11	-- scripts/common.sh@335 -- # IFS=.-:
00:25:41.320    23:59:11	-- scripts/common.sh@335 -- # read -ra ver1
00:25:41.320    23:59:11	-- scripts/common.sh@336 -- # IFS=.-:
00:25:41.320    23:59:11	-- scripts/common.sh@336 -- # read -ra ver2
00:25:41.320    23:59:11	-- scripts/common.sh@337 -- # local 'op=<'
00:25:41.320    23:59:11	-- scripts/common.sh@339 -- # ver1_l=2
00:25:41.320    23:59:11	-- scripts/common.sh@340 -- # ver2_l=1
00:25:41.320    23:59:11	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:25:41.320    23:59:11	-- scripts/common.sh@343 -- # case "$op" in
00:25:41.320    23:59:11	-- scripts/common.sh@344 -- # : 1
00:25:41.320    23:59:11	-- scripts/common.sh@363 -- # (( v = 0 ))
00:25:41.320    23:59:11	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:41.320     23:59:11	-- scripts/common.sh@364 -- # decimal 1
00:25:41.320     23:59:11	-- scripts/common.sh@352 -- # local d=1
00:25:41.320     23:59:11	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:41.320     23:59:11	-- scripts/common.sh@354 -- # echo 1
00:25:41.320    23:59:11	-- scripts/common.sh@364 -- # ver1[v]=1
00:25:41.320     23:59:11	-- scripts/common.sh@365 -- # decimal 2
00:25:41.320     23:59:11	-- scripts/common.sh@352 -- # local d=2
00:25:41.320     23:59:11	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:41.320     23:59:11	-- scripts/common.sh@354 -- # echo 2
00:25:41.320    23:59:11	-- scripts/common.sh@365 -- # ver2[v]=2
00:25:41.320    23:59:11	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:25:41.320    23:59:11	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:25:41.320    23:59:11	-- scripts/common.sh@367 -- # return 0
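The lt 1.15 2 call above is scripts/common.sh's cmp_versions at work: both version strings are split on ., - and :, then compared field by field until one side wins. A condensed sketch of the "<" case (missing fields are taken as zero here; the real helper routes each field through its decimal function):

    lt() {    # lt A B -> exit 0 when version A < version B
      local IFS=.-: v ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
      done
      return 1    # equal is not less-than
    }
    lt 1.15 2 && echo "installed lcov predates 2.x"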
00:25:41.320    23:59:11	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:41.320    23:59:11	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:25:41.320  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:41.320  		--rc genhtml_branch_coverage=1
00:25:41.320  		--rc genhtml_function_coverage=1
00:25:41.320  		--rc genhtml_legend=1
00:25:41.320  		--rc geninfo_all_blocks=1
00:25:41.320  		--rc geninfo_unexecuted_blocks=1
00:25:41.320  		
00:25:41.320  		'
00:25:41.320    23:59:11	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:25:41.320  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:41.320  		--rc genhtml_branch_coverage=1
00:25:41.320  		--rc genhtml_function_coverage=1
00:25:41.320  		--rc genhtml_legend=1
00:25:41.320  		--rc geninfo_all_blocks=1
00:25:41.320  		--rc geninfo_unexecuted_blocks=1
00:25:41.320  		
00:25:41.320  		'
00:25:41.320    23:59:11	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:25:41.320  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:41.320  		--rc genhtml_branch_coverage=1
00:25:41.320  		--rc genhtml_function_coverage=1
00:25:41.320  		--rc genhtml_legend=1
00:25:41.320  		--rc geninfo_all_blocks=1
00:25:41.320  		--rc geninfo_unexecuted_blocks=1
00:25:41.320  		
00:25:41.320  		'
00:25:41.320    23:59:11	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:25:41.320  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:41.320  		--rc genhtml_branch_coverage=1
00:25:41.320  		--rc genhtml_function_coverage=1
00:25:41.320  		--rc genhtml_legend=1
00:25:41.320  		--rc geninfo_all_blocks=1
00:25:41.320  		--rc geninfo_unexecuted_blocks=1
00:25:41.320  		
00:25:41.320  		'
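Because the installed lcov predates 2.x, branch and function coverage have to be switched on through --rc knobs, and the harness pre-bakes them into LCOV_OPTS and a wrapped LCOV command. Flattening the multi-line exports above:

    # lcov < 2.x: coverage features are enabled via --rc options
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
        --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
        --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
    export LCOV="lcov $LCOV_OPTS"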
00:25:41.320   23:59:11	-- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh
00:25:41.320      23:59:11	-- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:25:41.321     23:59:11	-- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt
00:25:41.321    23:59:11	-- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt
00:25:41.321     23:59:11	-- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../..
00:25:41.321    23:59:11	-- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:25:41.321    23:59:11	-- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:25:41.321     23:59:11	-- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:25:41.321     23:59:11	-- common/autotest_common.sh@34 -- # set -e
00:25:41.321     23:59:11	-- common/autotest_common.sh@35 -- # shopt -s nullglob
00:25:41.321     23:59:11	-- common/autotest_common.sh@36 -- # shopt -s extglob
00:25:41.321     23:59:11	-- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:25:41.321     23:59:11	-- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:25:41.321      23:59:11	-- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:25:41.321      23:59:11	-- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:25:41.321      23:59:11	-- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:25:41.321      23:59:11	-- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:25:41.321      23:59:11	-- common/build_config.sh@5 -- # CONFIG_USDT=n
00:25:41.321      23:59:11	-- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:25:41.321      23:59:11	-- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:25:41.321      23:59:11	-- common/build_config.sh@8 -- # CONFIG_RBD=n
00:25:41.321      23:59:11	-- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:25:41.321      23:59:11	-- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:25:41.321      23:59:11	-- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:25:41.321      23:59:11	-- common/build_config.sh@12 -- # CONFIG_SMA=n
00:25:41.321      23:59:11	-- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:25:41.321      23:59:11	-- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:25:41.321      23:59:11	-- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:25:41.321      23:59:11	-- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:25:41.321      23:59:11	-- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n
00:25:41.321      23:59:11	-- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:25:41.321      23:59:11	-- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:25:41.321      23:59:11	-- common/build_config.sh@20 -- # CONFIG_LTO=n
00:25:41.321      23:59:11	-- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y
00:25:41.321      23:59:11	-- common/build_config.sh@22 -- # CONFIG_CET=n
00:25:41.321      23:59:11	-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:25:41.321      23:59:11	-- common/build_config.sh@24 -- # CONFIG_OCF_PATH=
00:25:41.321      23:59:11	-- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y
00:25:41.321      23:59:11	-- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n
00:25:41.321      23:59:11	-- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n
00:25:41.321      23:59:11	-- common/build_config.sh@28 -- # CONFIG_UBLK=n
00:25:41.321      23:59:11	-- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y
00:25:41.321      23:59:11	-- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH=
00:25:41.321      23:59:11	-- common/build_config.sh@31 -- # CONFIG_OCF=n
00:25:41.321      23:59:11	-- common/build_config.sh@32 -- # CONFIG_FUSE=n
00:25:41.321      23:59:11	-- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR=
00:25:41.321      23:59:11	-- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=
00:25:41.321      23:59:11	-- common/build_config.sh@35 -- # CONFIG_FUZZER=n
00:25:41.321      23:59:11	-- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build
00:25:41.321      23:59:11	-- common/build_config.sh@37 -- # CONFIG_CRYPTO=n
00:25:41.321      23:59:11	-- common/build_config.sh@38 -- # CONFIG_PGO_USE=n
00:25:41.321      23:59:11	-- common/build_config.sh@39 -- # CONFIG_VHOST=y
00:25:41.321      23:59:11	-- common/build_config.sh@40 -- # CONFIG_DAOS=n
00:25:41.321      23:59:11	-- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=
00:25:41.321      23:59:11	-- common/build_config.sh@42 -- # CONFIG_DAOS_DIR=
00:25:41.321      23:59:11	-- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y
00:25:41.321      23:59:11	-- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:25:41.321      23:59:11	-- common/build_config.sh@45 -- # CONFIG_VIRTIO=y
00:25:41.321      23:59:11	-- common/build_config.sh@46 -- # CONFIG_COVERAGE=y
00:25:41.321      23:59:11	-- common/build_config.sh@47 -- # CONFIG_RDMA=y
00:25:41.321      23:59:11	-- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:25:41.321      23:59:11	-- common/build_config.sh@49 -- # CONFIG_URING_PATH=
00:25:41.321      23:59:11	-- common/build_config.sh@50 -- # CONFIG_XNVME=n
00:25:41.321      23:59:11	-- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n
00:25:41.321      23:59:11	-- common/build_config.sh@52 -- # CONFIG_ARCH=native
00:25:41.321      23:59:11	-- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n
00:25:41.321      23:59:11	-- common/build_config.sh@54 -- # CONFIG_WERROR=y
00:25:41.321      23:59:11	-- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n
00:25:41.321      23:59:11	-- common/build_config.sh@56 -- # CONFIG_UBSAN=y
00:25:41.321      23:59:11	-- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR=
00:25:41.321      23:59:11	-- common/build_config.sh@58 -- # CONFIG_GOLANG=n
00:25:41.321      23:59:11	-- common/build_config.sh@59 -- # CONFIG_ISAL=y
00:25:41.321      23:59:11	-- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n
00:25:41.321      23:59:11	-- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=
00:25:41.321      23:59:11	-- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs
00:25:41.321      23:59:11	-- common/build_config.sh@63 -- # CONFIG_APPS=y
00:25:41.321      23:59:11	-- common/build_config.sh@64 -- # CONFIG_SHARED=n
00:25:41.321      23:59:11	-- common/build_config.sh@65 -- # CONFIG_FC_PATH=
00:25:41.321      23:59:11	-- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n
00:25:41.321      23:59:11	-- common/build_config.sh@67 -- # CONFIG_FC=n
00:25:41.321      23:59:11	-- common/build_config.sh@68 -- # CONFIG_AVAHI=n
00:25:41.321      23:59:11	-- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y
00:25:41.321      23:59:11	-- common/build_config.sh@70 -- # CONFIG_RAID5F=y
00:25:41.321      23:59:11	-- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y
00:25:41.321      23:59:11	-- common/build_config.sh@72 -- # CONFIG_TESTS=y
00:25:41.321      23:59:11	-- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n
00:25:41.321      23:59:11	-- common/build_config.sh@74 -- # CONFIG_MAX_LCORES=
00:25:41.321      23:59:11	-- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n
00:25:41.321      23:59:11	-- common/build_config.sh@76 -- # CONFIG_DEBUG=y
00:25:41.321      23:59:11	-- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n
00:25:41.321      23:59:11	-- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX=
00:25:41.321      23:59:11	-- common/build_config.sh@79 -- # CONFIG_URING=n
00:25:41.321     23:59:11	-- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:25:41.321        23:59:11	-- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:25:41.321       23:59:11	-- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:25:41.321      23:59:12	-- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:25:41.321      23:59:12	-- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:25:41.321      23:59:12	-- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:25:41.321      23:59:12	-- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:25:41.321      23:59:12	-- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:25:41.321      23:59:12	-- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:25:41.321      23:59:12	-- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:25:41.321      23:59:12	-- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:25:41.321      23:59:12	-- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:25:41.321      23:59:12	-- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:25:41.321      23:59:12	-- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:25:41.321      23:59:12	-- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:25:41.321      23:59:12	-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:25:41.321  #define SPDK_CONFIG_H
00:25:41.321  #define SPDK_CONFIG_APPS 1
00:25:41.321  #define SPDK_CONFIG_ARCH native
00:25:41.321  #define SPDK_CONFIG_ASAN 1
00:25:41.321  #undef SPDK_CONFIG_AVAHI
00:25:41.321  #undef SPDK_CONFIG_CET
00:25:41.321  #define SPDK_CONFIG_COVERAGE 1
00:25:41.321  #define SPDK_CONFIG_CROSS_PREFIX 
00:25:41.321  #undef SPDK_CONFIG_CRYPTO
00:25:41.321  #undef SPDK_CONFIG_CRYPTO_MLX5
00:25:41.321  #undef SPDK_CONFIG_CUSTOMOCF
00:25:41.321  #undef SPDK_CONFIG_DAOS
00:25:41.321  #define SPDK_CONFIG_DAOS_DIR 
00:25:41.321  #define SPDK_CONFIG_DEBUG 1
00:25:41.321  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:25:41.321  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build
00:25:41.321  #define SPDK_CONFIG_DPDK_INC_DIR 
00:25:41.321  #define SPDK_CONFIG_DPDK_LIB_DIR 
00:25:41.321  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:25:41.321  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:25:41.321  #define SPDK_CONFIG_EXAMPLES 1
00:25:41.321  #undef SPDK_CONFIG_FC
00:25:41.321  #define SPDK_CONFIG_FC_PATH 
00:25:41.321  #define SPDK_CONFIG_FIO_PLUGIN 1
00:25:41.321  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:25:41.321  #undef SPDK_CONFIG_FUSE
00:25:41.321  #undef SPDK_CONFIG_FUZZER
00:25:41.321  #define SPDK_CONFIG_FUZZER_LIB 
00:25:41.321  #undef SPDK_CONFIG_GOLANG
00:25:41.321  #undef SPDK_CONFIG_HAVE_ARC4RANDOM
00:25:41.321  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:25:41.321  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:25:41.321  #undef SPDK_CONFIG_HAVE_LIBBSD
00:25:41.321  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:25:41.321  #define SPDK_CONFIG_IDXD 1
00:25:41.321  #undef SPDK_CONFIG_IDXD_KERNEL
00:25:41.321  #undef SPDK_CONFIG_IPSEC_MB
00:25:41.321  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:25:41.321  #define SPDK_CONFIG_ISAL 1
00:25:41.321  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:25:41.321  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:25:41.321  #define SPDK_CONFIG_LIBDIR 
00:25:41.321  #undef SPDK_CONFIG_LTO
00:25:41.322  #define SPDK_CONFIG_MAX_LCORES 
00:25:41.322  #define SPDK_CONFIG_NVME_CUSE 1
00:25:41.322  #undef SPDK_CONFIG_OCF
00:25:41.322  #define SPDK_CONFIG_OCF_PATH 
00:25:41.322  #define SPDK_CONFIG_OPENSSL_PATH 
00:25:41.322  #undef SPDK_CONFIG_PGO_CAPTURE
00:25:41.322  #undef SPDK_CONFIG_PGO_USE
00:25:41.322  #define SPDK_CONFIG_PREFIX /usr/local
00:25:41.322  #define SPDK_CONFIG_RAID5F 1
00:25:41.322  #undef SPDK_CONFIG_RBD
00:25:41.322  #define SPDK_CONFIG_RDMA 1
00:25:41.322  #define SPDK_CONFIG_RDMA_PROV verbs
00:25:41.322  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:25:41.322  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:25:41.322  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:25:41.322  #undef SPDK_CONFIG_SHARED
00:25:41.322  #undef SPDK_CONFIG_SMA
00:25:41.322  #define SPDK_CONFIG_TESTS 1
00:25:41.322  #undef SPDK_CONFIG_TSAN
00:25:41.322  #undef SPDK_CONFIG_UBLK
00:25:41.322  #define SPDK_CONFIG_UBSAN 1
00:25:41.322  #define SPDK_CONFIG_UNIT_TESTS 1
00:25:41.322  #undef SPDK_CONFIG_URING
00:25:41.322  #define SPDK_CONFIG_URING_PATH 
00:25:41.322  #undef SPDK_CONFIG_URING_ZNS
00:25:41.322  #undef SPDK_CONFIG_USDT
00:25:41.322  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:25:41.322  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:25:41.322  #undef SPDK_CONFIG_VFIO_USER
00:25:41.322  #define SPDK_CONFIG_VFIO_USER_DIR 
00:25:41.322  #define SPDK_CONFIG_VHOST 1
00:25:41.322  #define SPDK_CONFIG_VIRTIO 1
00:25:41.322  #undef SPDK_CONFIG_VTUNE
00:25:41.322  #define SPDK_CONFIG_VTUNE_DIR 
00:25:41.322  #define SPDK_CONFIG_WERROR 1
00:25:41.322  #define SPDK_CONFIG_WPDK_DIR 
00:25:41.322  #undef SPDK_CONFIG_XNVME
00:25:41.322  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:25:41.322      23:59:12	-- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
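The sprawling [[ ... == *\#\d\e\f\i\n\e...* ]] above is applications.sh checking whether the generated header was built with debugging enabled, which gates SPDK_AUTOTEST_DEBUG_APPS. It is nothing more than a substring match against the whole file:

    # plain substring test against the generated config header
    [[ $(< /home/vagrant/spdk_repo/spdk/include/spdk/config.h) == *"#define SPDK_CONFIG_DEBUG"* ]] \
      && echo "debug build: debug app variants may be used"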
00:25:41.322     23:59:12	-- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:41.322      23:59:12	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:41.322      23:59:12	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:41.322      23:59:12	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:41.322       23:59:12	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:41.322       23:59:12	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:41.322       23:59:12	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:41.322       23:59:12	-- paths/export.sh@5 -- # export PATH
00:25:41.322       23:59:12	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
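The stuttering PATH above is not corruption: paths/export.sh gets sourced once per layer of the source chain, and each pass blindly prepends the same three tool directories, so prior copies survive further down the string. Per the @2-@6 trace lines, the file amounts to:

    # paths/export.sh (prepends unconditionally, hence the duplicate entries)
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH
    echo $PATH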
00:25:41.322     23:59:12	-- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:25:41.322        23:59:12	-- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:25:41.322       23:59:12	-- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:25:41.322      23:59:12	-- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:25:41.322       23:59:12	-- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:25:41.322      23:59:12	-- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:25:41.322      23:59:12	-- pm/common@16 -- # TEST_TAG=N/A
00:25:41.322      23:59:12	-- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:25:41.322     23:59:12	-- common/autotest_common.sh@52 -- # : 1
00:25:41.322     23:59:12	-- common/autotest_common.sh@53 -- # export RUN_NIGHTLY
00:25:41.322     23:59:12	-- common/autotest_common.sh@56 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:25:41.322     23:59:12	-- common/autotest_common.sh@58 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND
00:25:41.322     23:59:12	-- common/autotest_common.sh@60 -- # : 1
00:25:41.322     23:59:12	-- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:25:41.322     23:59:12	-- common/autotest_common.sh@62 -- # : 1
00:25:41.322     23:59:12	-- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST
00:25:41.322     23:59:12	-- common/autotest_common.sh@64 -- # :
00:25:41.322     23:59:12	-- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD
00:25:41.322     23:59:12	-- common/autotest_common.sh@66 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD
00:25:41.322     23:59:12	-- common/autotest_common.sh@68 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL
00:25:41.322     23:59:12	-- common/autotest_common.sh@70 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI
00:25:41.322     23:59:12	-- common/autotest_common.sh@72 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR
00:25:41.322     23:59:12	-- common/autotest_common.sh@74 -- # : 1
00:25:41.322     23:59:12	-- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME
00:25:41.322     23:59:12	-- common/autotest_common.sh@76 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR
00:25:41.322     23:59:12	-- common/autotest_common.sh@78 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP
00:25:41.322     23:59:12	-- common/autotest_common.sh@80 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI
00:25:41.322     23:59:12	-- common/autotest_common.sh@82 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE
00:25:41.322     23:59:12	-- common/autotest_common.sh@84 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP
00:25:41.322     23:59:12	-- common/autotest_common.sh@86 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF
00:25:41.322     23:59:12	-- common/autotest_common.sh@88 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER
00:25:41.322     23:59:12	-- common/autotest_common.sh@90 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU
00:25:41.322     23:59:12	-- common/autotest_common.sh@92 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER
00:25:41.322     23:59:12	-- common/autotest_common.sh@94 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT
00:25:41.322     23:59:12	-- common/autotest_common.sh@96 -- # : rdma
00:25:41.322     23:59:12	-- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT
00:25:41.322     23:59:12	-- common/autotest_common.sh@98 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD
00:25:41.322     23:59:12	-- common/autotest_common.sh@100 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST
00:25:41.322     23:59:12	-- common/autotest_common.sh@102 -- # : 1
00:25:41.322     23:59:12	-- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV
00:25:41.322     23:59:12	-- common/autotest_common.sh@104 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT
00:25:41.322     23:59:12	-- common/autotest_common.sh@106 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS
00:25:41.322     23:59:12	-- common/autotest_common.sh@108 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT
00:25:41.322     23:59:12	-- common/autotest_common.sh@110 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL
00:25:41.322     23:59:12	-- common/autotest_common.sh@112 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS
00:25:41.322     23:59:12	-- common/autotest_common.sh@114 -- # : 1
00:25:41.322     23:59:12	-- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN
00:25:41.322     23:59:12	-- common/autotest_common.sh@116 -- # : 1
00:25:41.322     23:59:12	-- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN
00:25:41.322     23:59:12	-- common/autotest_common.sh@118 -- # :
00:25:41.322     23:59:12	-- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK
00:25:41.322     23:59:12	-- common/autotest_common.sh@120 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT
00:25:41.322     23:59:12	-- common/autotest_common.sh@122 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO
00:25:41.322     23:59:12	-- common/autotest_common.sh@124 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL
00:25:41.322     23:59:12	-- common/autotest_common.sh@126 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF
00:25:41.322     23:59:12	-- common/autotest_common.sh@128 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD
00:25:41.322     23:59:12	-- common/autotest_common.sh@130 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL
00:25:41.322     23:59:12	-- common/autotest_common.sh@132 -- # :
00:25:41.322     23:59:12	-- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK
00:25:41.322     23:59:12	-- common/autotest_common.sh@134 -- # : true
00:25:41.322     23:59:12	-- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X
00:25:41.322     23:59:12	-- common/autotest_common.sh@136 -- # : 1
00:25:41.322     23:59:12	-- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5
00:25:41.322     23:59:12	-- common/autotest_common.sh@138 -- # : 0
00:25:41.322     23:59:12	-- common/autotest_common.sh@139 -- # export SPDK_TEST_URING
00:25:41.323     23:59:12	-- common/autotest_common.sh@140 -- # : 0
00:25:41.323     23:59:12	-- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT
00:25:41.323     23:59:12	-- common/autotest_common.sh@142 -- # : 0
00:25:41.323     23:59:12	-- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO
00:25:41.323     23:59:12	-- common/autotest_common.sh@144 -- # : 0
00:25:41.323     23:59:12	-- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER
00:25:41.323     23:59:12	-- common/autotest_common.sh@146 -- # : 0
00:25:41.323     23:59:12	-- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD
00:25:41.323     23:59:12	-- common/autotest_common.sh@148 -- # :
00:25:41.323     23:59:12	-- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS
00:25:41.323     23:59:12	-- common/autotest_common.sh@150 -- # : 0
00:25:41.323     23:59:12	-- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA
00:25:41.323     23:59:12	-- common/autotest_common.sh@152 -- # : 0
00:25:41.323     23:59:12	-- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS
00:25:41.323     23:59:12	-- common/autotest_common.sh@154 -- # : 0
00:25:41.323     23:59:12	-- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME
00:25:41.323     23:59:12	-- common/autotest_common.sh@156 -- # : 0
00:25:41.323     23:59:12	-- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA
00:25:41.323     23:59:12	-- common/autotest_common.sh@158 -- # : 0
00:25:41.323     23:59:12	-- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA
00:25:41.323     23:59:12	-- common/autotest_common.sh@160 -- # : 0
00:25:41.323     23:59:12	-- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT
00:25:41.323     23:59:12	-- common/autotest_common.sh@163 -- # :
00:25:41.323     23:59:12	-- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET
00:25:41.323     23:59:12	-- common/autotest_common.sh@165 -- # : 0
00:25:41.323     23:59:12	-- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS
00:25:41.323     23:59:12	-- common/autotest_common.sh@167 -- # : 0
00:25:41.323     23:59:12	-- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT
00:25:41.323     23:59:12	-- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:25:41.323     23:59:12	-- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:25:41.323     23:59:12	-- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:25:41.323     23:59:12	-- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:25:41.323     23:59:12	-- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:25:41.323     23:59:12	-- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:25:41.323     23:59:12	-- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:25:41.323     23:59:12	-- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:25:41.323     23:59:12	-- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:25:41.323     23:59:12	-- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:25:41.323     23:59:12	-- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:25:41.323     23:59:12	-- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:25:41.323     23:59:12	-- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1
00:25:41.323     23:59:12	-- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1
00:25:41.323     23:59:12	-- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:25:41.323     23:59:12	-- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:25:41.323     23:59:12	-- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:25:41.323     23:59:12	-- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:25:41.323     23:59:12	-- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:25:41.323     23:59:12	-- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file
00:25:41.323     23:59:12	-- common/autotest_common.sh@196 -- # cat
00:25:41.323     23:59:12	-- common/autotest_common.sh@222 -- # echo leak:libfuse3.so
00:25:41.323     23:59:12	-- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:25:41.323     23:59:12	-- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
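The suppression plumbing above builds the leak list consumed by LeakSanitizer: wipe any stale file, append the known-leaky libraries (libfuse3 in this run; the cat step that seeds the file is elided in the trace), and point LSAN_OPTIONS at it. Equivalent to:

    sup=/var/tmp/asan_suppression_file
    rm -rf "$sup"
    echo "leak:libfuse3.so" >> "$sup"       # known leak outside SPDK's control
    export LSAN_OPTIONS=suppressions=$sup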
00:25:41.323     23:59:12	-- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:25:41.323     23:59:12	-- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:25:41.323     23:59:12	-- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']'
00:25:41.323     23:59:12	-- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR
00:25:41.323     23:59:12	-- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:25:41.323     23:59:12	-- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:25:41.323     23:59:12	-- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:25:41.323     23:59:12	-- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:25:41.323     23:59:12	-- common/autotest_common.sh@239 -- # export QEMU_BIN=
00:25:41.323     23:59:12	-- common/autotest_common.sh@239 -- # QEMU_BIN=
00:25:41.323     23:59:12	-- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:25:41.323     23:59:12	-- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:25:41.323     23:59:12	-- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:25:41.323     23:59:12	-- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:25:41.323     23:59:12	-- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:25:41.323     23:59:12	-- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:25:41.323     23:59:12	-- common/autotest_common.sh@247 -- # _LCOV_MAIN=0
00:25:41.323     23:59:12	-- common/autotest_common.sh@248 -- # _LCOV_LLVM=1
00:25:41.323     23:59:12	-- common/autotest_common.sh@249 -- # _LCOV=
00:25:41.323     23:59:12	-- common/autotest_common.sh@250 -- # [[ '' == *clang* ]]
00:25:41.323     23:59:12	-- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]]
00:25:41.323     23:59:12	-- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:25:41.323     23:59:12	-- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]=
00:25:41.323     23:59:12	-- common/autotest_common.sh@255 -- # lcov_opt=
00:25:41.323     23:59:12	-- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']'
00:25:41.323     23:59:12	-- common/autotest_common.sh@259 -- # export valgrind=
00:25:41.323     23:59:12	-- common/autotest_common.sh@259 -- # valgrind=
00:25:41.323      23:59:12	-- common/autotest_common.sh@265 -- # uname -s
00:25:41.583     23:59:12	-- common/autotest_common.sh@265 -- # '[' Linux = Linux ']'
00:25:41.583     23:59:12	-- common/autotest_common.sh@266 -- # HUGEMEM=4096
00:25:41.583     23:59:12	-- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes
00:25:41.583     23:59:12	-- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes
00:25:41.583     23:59:12	-- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]]
00:25:41.583     23:59:12	-- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]]
00:25:41.583     23:59:12	-- common/autotest_common.sh@275 -- # MAKE=make
00:25:41.583     23:59:12	-- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10
00:25:41.583     23:59:12	-- common/autotest_common.sh@292 -- # export HUGEMEM=4096
00:25:41.583     23:59:12	-- common/autotest_common.sh@292 -- # HUGEMEM=4096
00:25:41.583     23:59:12	-- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:25:41.583     23:59:12	-- common/autotest_common.sh@299 -- # NO_HUGE=()
00:25:41.583     23:59:12	-- common/autotest_common.sh@300 -- # TEST_MODE=
00:25:41.583     23:59:12	-- common/autotest_common.sh@319 -- # [[ -z 132888 ]]
00:25:41.583     23:59:12	-- common/autotest_common.sh@319 -- # kill -0 132888
00:25:41.583     23:59:12	-- common/autotest_common.sh@1675 -- # set_test_storage 2147483648
00:25:41.583     23:59:12	-- common/autotest_common.sh@329 -- # [[ -v testdir ]]
00:25:41.583     23:59:12	-- common/autotest_common.sh@331 -- # local requested_size=2147483648
00:25:41.583     23:59:12	-- common/autotest_common.sh@332 -- # local mount target_dir
00:25:41.583     23:59:12	-- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses
00:25:41.583     23:59:12	-- common/autotest_common.sh@335 -- # local source fs size avail mount use
00:25:41.583     23:59:12	-- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates
00:25:41.583      23:59:12	-- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX
00:25:41.583     23:59:12	-- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.x2PmJF
00:25:41.583     23:59:12	-- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:25:41.583     23:59:12	-- common/autotest_common.sh@346 -- # [[ -n '' ]]
00:25:41.583     23:59:12	-- common/autotest_common.sh@351 -- # [[ -n '' ]]
00:25:41.583     23:59:12	-- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.x2PmJF/tests/interrupt /tmp/spdk.x2PmJF
00:25:41.583     23:59:12	-- common/autotest_common.sh@359 -- # requested_size=2214592512
00:25:41.583     23:59:12	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:41.583      23:59:12	-- common/autotest_common.sh@328 -- # df -T
00:25:41.583      23:59:12	-- common/autotest_common.sh@328 -- # grep -v Filesystem
00:25:41.583     23:59:12	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:25:41.583     23:59:12	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:25:41.583     23:59:12	-- common/autotest_common.sh@363 -- # avails["$mount"]=1248956416
00:25:41.583     23:59:12	-- common/autotest_common.sh@363 -- # sizes["$mount"]=1253683200
00:25:41.583     23:59:12	-- common/autotest_common.sh@364 -- # uses["$mount"]=4726784
00:25:41.583     23:59:12	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:41.583     23:59:12	-- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1
00:25:41.583     23:59:12	-- common/autotest_common.sh@362 -- # fss["$mount"]=ext4
00:25:41.583     23:59:12	-- common/autotest_common.sh@363 -- # avails["$mount"]=10289332224
00:25:41.583     23:59:12	-- common/autotest_common.sh@363 -- # sizes["$mount"]=20616794112
00:25:41.583     23:59:12	-- common/autotest_common.sh@364 -- # uses["$mount"]=10310684672
00:25:41.583     23:59:12	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:41.583     23:59:12	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:25:41.583     23:59:12	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:25:41.583     23:59:12	-- common/autotest_common.sh@363 -- # avails["$mount"]=6265810944
00:25:41.583     23:59:12	-- common/autotest_common.sh@363 -- # sizes["$mount"]=6268403712
00:25:41.583     23:59:12	-- common/autotest_common.sh@364 -- # uses["$mount"]=2592768
00:25:41.583     23:59:12	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:41.583     23:59:12	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:25:41.583     23:59:12	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:25:41.583     23:59:12	-- common/autotest_common.sh@363 -- # avails["$mount"]=5242880
00:25:41.583     23:59:12	-- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880
00:25:41.583     23:59:12	-- common/autotest_common.sh@364 -- # uses["$mount"]=0
00:25:41.583     23:59:12	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:41.583     23:59:12	-- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15
00:25:41.583     23:59:12	-- common/autotest_common.sh@362 -- # fss["$mount"]=vfat
00:25:41.583     23:59:12	-- common/autotest_common.sh@363 -- # avails["$mount"]=103061504
00:25:41.583     23:59:12	-- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968
00:25:41.583     23:59:12	-- common/autotest_common.sh@364 -- # uses["$mount"]=6334464
00:25:41.583     23:59:12	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:41.583     23:59:12	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:25:41.583     23:59:12	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:25:41.583     23:59:12	-- common/autotest_common.sh@363 -- # avails["$mount"]=1253675008
00:25:41.583     23:59:12	-- common/autotest_common.sh@363 -- # sizes["$mount"]=1253679104
00:25:41.583     23:59:12	-- common/autotest_common.sh@364 -- # uses["$mount"]=4096
00:25:41.583     23:59:12	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:41.583     23:59:12	-- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output
00:25:41.583     23:59:12	-- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs
00:25:41.583     23:59:12	-- common/autotest_common.sh@363 -- # avails["$mount"]=98637742080
00:25:41.583     23:59:12	-- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992
00:25:41.583     23:59:12	-- common/autotest_common.sh@364 -- # uses["$mount"]=1065037824
00:25:41.583     23:59:12	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:41.583     23:59:12	-- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n'
00:25:41.583  * Looking for test storage...
00:25:41.583     23:59:12	-- common/autotest_common.sh@369 -- # local target_space new_size
00:25:41.583     23:59:12	-- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}"
00:25:41.583      23:59:12	-- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt
00:25:41.583      23:59:12	-- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}'
00:25:41.583     23:59:12	-- common/autotest_common.sh@373 -- # mount=/
00:25:41.583     23:59:12	-- common/autotest_common.sh@375 -- # target_space=10289332224
00:25:41.583     23:59:12	-- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size ))
00:25:41.583     23:59:12	-- common/autotest_common.sh@379 -- # (( target_space >= requested_size ))
00:25:41.583     23:59:12	-- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]]
00:25:41.583     23:59:12	-- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]]
00:25:41.583     23:59:12	-- common/autotest_common.sh@381 -- # [[ / == / ]]
00:25:41.583     23:59:12	-- common/autotest_common.sh@382 -- # new_size=12525277184
00:25:41.583     23:59:12	-- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 ))
00:25:41.583     23:59:12	-- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:25:41.583     23:59:12	-- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:25:41.583     23:59:12	-- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt
00:25:41.583  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:25:41.583     23:59:12	-- common/autotest_common.sh@390 -- # return 0
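The set_test_storage walk above amounts to: snapshot `df -T`, then take the first candidate directory whose backing filesystem still has the requested headroom (the 2 GiB request plus the test dir's own usage here), falling back to a mktemp name otherwise. A minimal bash sketch of that selection — `pick_test_storage` is a hypothetical name, and the candidate path and size are taken from this run:

    pick_test_storage() {
        local dir=$1 requested=$2 mount avail
        # df --output reports the backing mount point and free KiB for the dir
        read -r mount avail < <(df --output=target,avail "$dir" | tail -n1)
        if (( avail * 1024 >= requested )); then
            echo "$dir"                      # enough room: test in place
        else
            mktemp -udt spdk.XXXXXX          # otherwise fall back under /tmp
        fi
    }
    pick_test_storage /home/vagrant/spdk_repo/spdk/test/interrupt $((2 * 1024**3))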
00:25:41.583     23:59:12	-- common/autotest_common.sh@1677 -- # set -o errtrace
00:25:41.583     23:59:12	-- common/autotest_common.sh@1678 -- # shopt -s extdebug
00:25:41.583     23:59:12	-- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:25:41.583     23:59:12	-- common/autotest_common.sh@1681 -- # PS4=' \t	-- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:25:41.583     23:59:12	-- common/autotest_common.sh@1682 -- # true
00:25:41.583     23:59:12	-- common/autotest_common.sh@1684 -- # xtrace_fd
00:25:41.583     23:59:12	-- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:25:41.583     23:59:12	-- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:25:41.583     23:59:12	-- common/autotest_common.sh@27 -- # exec
00:25:41.583     23:59:12	-- common/autotest_common.sh@29 -- # exec
00:25:41.583     23:59:12	-- common/autotest_common.sh@31 -- # xtrace_restore
00:25:41.583     23:59:12	-- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:25:41.583     23:59:12	-- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:25:41.583     23:59:12	-- common/autotest_common.sh@18 -- # set -x
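Every line from here on carries the timestamped `file@line` prefix because of the setup just traced: xtrace output is routed to a dedicated file descriptor (the `/proc/self/fd/13` check above) and PS4 embeds the time and source location. A reduced sketch of the same plumbing; the log path is an assumption, and BASH_XTRACEFD stands in for the exec-based redirection:

    exec 13>/tmp/xtrace.log      # dedicated descriptor for trace output
    export BASH_XTRACEFD=13      # bash writes set -x output here, not stderr
    PS4=' \t	-- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
    set -x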
00:25:41.583     23:59:12	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:25:41.583      23:59:12	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:25:41.583      23:59:12	-- common/autotest_common.sh@1690 -- # lcov --version
00:25:41.583     23:59:12	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:25:41.583     23:59:12	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:25:41.583     23:59:12	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:25:41.583     23:59:12	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:25:41.583     23:59:12	-- scripts/common.sh@335 -- # IFS=.-:
00:25:41.583     23:59:12	-- scripts/common.sh@335 -- # read -ra ver1
00:25:41.583     23:59:12	-- scripts/common.sh@336 -- # IFS=.-:
00:25:41.583     23:59:12	-- scripts/common.sh@336 -- # read -ra ver2
00:25:41.583     23:59:12	-- scripts/common.sh@337 -- # local 'op=<'
00:25:41.583     23:59:12	-- scripts/common.sh@339 -- # ver1_l=2
00:25:41.583     23:59:12	-- scripts/common.sh@340 -- # ver2_l=1
00:25:41.584     23:59:12	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:25:41.584     23:59:12	-- scripts/common.sh@343 -- # case "$op" in
00:25:41.584     23:59:12	-- scripts/common.sh@344 -- # : 1
00:25:41.584     23:59:12	-- scripts/common.sh@363 -- # (( v = 0 ))
00:25:41.584     23:59:12	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:41.584      23:59:12	-- scripts/common.sh@364 -- # decimal 1
00:25:41.584      23:59:12	-- scripts/common.sh@352 -- # local d=1
00:25:41.584      23:59:12	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:41.584      23:59:12	-- scripts/common.sh@354 -- # echo 1
00:25:41.584     23:59:12	-- scripts/common.sh@364 -- # ver1[v]=1
00:25:41.584      23:59:12	-- scripts/common.sh@365 -- # decimal 2
00:25:41.584      23:59:12	-- scripts/common.sh@352 -- # local d=2
00:25:41.584      23:59:12	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:41.584      23:59:12	-- scripts/common.sh@354 -- # echo 2
00:25:41.584     23:59:12	-- scripts/common.sh@365 -- # ver2[v]=2
00:25:41.584     23:59:12	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:25:41.584     23:59:12	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:25:41.584     23:59:12	-- scripts/common.sh@367 -- # return 0
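The cmp_versions trace above (for `lt 1.15 2`) is a plain element-wise numeric compare: both strings are split on `.`, `-`, and `:`, missing fields count as zero, and the first differing field decides — here 1 < 2, so lcov is pre-2.0 and the 1.x option spelling is kept. A compact sketch of the `<` case; `version_lt` is a hypothetical name:

    version_lt() {
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field wins
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                        # equal is not less-than
    }
    version_lt 1.15 2 && echo "pre-2.0 lcov"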
00:25:41.584     23:59:12	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:41.584     23:59:12	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:25:41.584  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:41.584  		--rc genhtml_branch_coverage=1
00:25:41.584  		--rc genhtml_function_coverage=1
00:25:41.584  		--rc genhtml_legend=1
00:25:41.584  		--rc geninfo_all_blocks=1
00:25:41.584  		--rc geninfo_unexecuted_blocks=1
00:25:41.584  		
00:25:41.584  		'
00:25:41.584     23:59:12	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:25:41.584  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:41.584  		--rc genhtml_branch_coverage=1
00:25:41.584  		--rc genhtml_function_coverage=1
00:25:41.584  		--rc genhtml_legend=1
00:25:41.584  		--rc geninfo_all_blocks=1
00:25:41.584  		--rc geninfo_unexecuted_blocks=1
00:25:41.584  		
00:25:41.584  		'
00:25:41.584     23:59:12	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:25:41.584  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:41.584  		--rc genhtml_branch_coverage=1
00:25:41.584  		--rc genhtml_function_coverage=1
00:25:41.584  		--rc genhtml_legend=1
00:25:41.584  		--rc geninfo_all_blocks=1
00:25:41.584  		--rc geninfo_unexecuted_blocks=1
00:25:41.584  		
00:25:41.584  		'
00:25:41.584     23:59:12	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:25:41.584  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:41.584  		--rc genhtml_branch_coverage=1
00:25:41.584  		--rc genhtml_function_coverage=1
00:25:41.584  		--rc genhtml_legend=1
00:25:41.584  		--rc geninfo_all_blocks=1
00:25:41.584  		--rc geninfo_unexecuted_blocks=1
00:25:41.584  		
00:25:41.584  		'
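With LCOV_OPTS and LCOV exported as above, any later coverage capture inherits the branch/function `--rc` switches without repeating them; illustratively (the directory and output file are assumptions):

    $LCOV --capture --directory build --output-file coverage.info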
00:25:41.584    23:59:12	-- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:25:41.584    23:59:12	-- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1
00:25:41.584    23:59:12	-- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2
00:25:41.584    23:59:12	-- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4
00:25:41.584    23:59:12	-- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07
00:25:41.584    23:59:12	-- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock
00:25:41.584   23:59:12	-- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:25:41.584   23:59:12	-- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:25:41.584   23:59:12	-- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt
00:25:41.584   23:59:12	-- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:41.584   23:59:12	-- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07
00:25:41.584   23:59:12	-- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=132950
00:25:41.584   23:59:12	-- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:25:41.584   23:59:12	-- interrupt/interrupt_common.sh@29 -- # waitforlisten 132950 /var/tmp/spdk.sock
00:25:41.584   23:59:12	-- common/autotest_common.sh@829 -- # '[' -z 132950 ']'
00:25:41.584   23:59:12	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:41.584   23:59:12	-- common/autotest_common.sh@834 -- # local max_retries=100
00:25:41.584   23:59:12	-- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:25:41.584  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:41.584   23:59:12	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:41.584   23:59:12	-- common/autotest_common.sh@838 -- # xtrace_disable
00:25:41.584   23:59:12	-- common/autotest_common.sh@10 -- # set +x
00:25:41.584  [2024-12-13 23:59:12.237760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:41.584  [2024-12-13 23:59:12.237963] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132950 ]
00:25:41.842  [2024-12-13 23:59:12.416618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:42.101  [2024-12-13 23:59:12.610154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:42.101  [2024-12-13 23:59:12.610300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:42.101  [2024-12-13 23:59:12.610304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:42.359  [2024-12-13 23:59:12.888336] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:25:42.618   23:59:13	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:42.618   23:59:13	-- common/autotest_common.sh@862 -- # return 0
00:25:42.618    23:59:13	-- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers
00:25:42.618    23:59:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.618    23:59:13	-- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]'
00:25:42.618    23:59:13	-- common/autotest_common.sh@10 -- # set +x
00:25:42.618    23:59:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.618   23:59:13	-- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{
00:25:42.618    "name": "app_thread",
00:25:42.618    "id": 1,
00:25:42.618    "active_pollers": [],
00:25:42.618    "timed_pollers": [
00:25:42.618      {
00:25:42.618        "name": "rpc_subsystem_poll",
00:25:42.618        "id": 1,
00:25:42.618        "state": "waiting",
00:25:42.618        "run_count": 0,
00:25:42.618        "busy_count": 0,
00:25:42.618        "period_ticks": 8800000
00:25:42.618      }
00:25:42.618    ],
00:25:42.618    "paused_pollers": []
00:25:42.618  }'
00:25:42.618    23:59:13	-- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name'
00:25:42.618   23:59:13	-- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers=
00:25:42.618   23:59:13	-- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' '
00:25:42.618    23:59:13	-- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name'
00:25:42.877   23:59:13	-- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll
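The baseline just recorded (' rpc_subsystem_poll': no active pollers, one timed) comes from taking thread 0 of `thread_get_pollers` and concatenating its active and timed poller names; the test repeats the same query after the AIO bdev work below and compares the two. The extraction, condensed against this run's RPC socket:

    app_thread=$(scripts/rpc.py -s /var/tmp/spdk.sock thread_get_pollers | jq -r '.threads[0]')
    active=$(jq -r '.active_pollers[].name' <<< "$app_thread")
    timed=$(jq -r '.timed_pollers[].name' <<< "$app_thread")
    echo "[$active $timed]"    # here: "[ rpc_subsystem_poll]"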
00:25:42.877   23:59:13	-- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio
00:25:42.877    23:59:13	-- interrupt/interrupt_common.sh@98 -- # uname -s
00:25:42.877   23:59:13	-- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:25:42.877   23:59:13	-- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
00:25:42.877  5000+0 records in
00:25:42.877  5000+0 records out
00:25:42.877  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0169554 s, 604 MB/s
00:25:42.877   23:59:13	-- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048
00:25:42.877  AIO0
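setup_bdev_aio, traced above, gives the interrupt target something to examine: a 10 MB zero-filled file exposed as an AIO bdev with 2048-byte logical blocks. Condensed, with an assumed backing-file path:

    dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000         # 10 MB backing file
    scripts/rpc.py bdev_aio_create /tmp/aiofile AIO0 2048      # prints "AIO0" as above
    scripts/rpc.py bdev_wait_for_examine                       # block until bdevs examined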
00:25:43.136   23:59:13	-- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:25:43.394   23:59:13	-- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1
00:25:43.394    23:59:13	-- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers
00:25:43.394    23:59:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:43.394    23:59:13	-- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]'
00:25:43.394    23:59:13	-- common/autotest_common.sh@10 -- # set +x
00:25:43.394    23:59:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:43.394   23:59:14	-- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{
00:25:43.394    "name": "app_thread",
00:25:43.394    "id": 1,
00:25:43.394    "active_pollers": [],
00:25:43.394    "timed_pollers": [
00:25:43.394      {
00:25:43.394        "name": "rpc_subsystem_poll",
00:25:43.394        "id": 1,
00:25:43.394        "state": "waiting",
00:25:43.394        "run_count": 0,
00:25:43.394        "busy_count": 0,
00:25:43.394        "period_ticks": 8800000
00:25:43.394      }
00:25:43.394    ],
00:25:43.394    "paused_pollers": []
00:25:43.394  }'
00:25:43.394    23:59:14	-- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name'
00:25:43.394   23:59:14	-- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers=
00:25:43.394   23:59:14	-- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' '
00:25:43.395    23:59:14	-- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name'
00:25:43.653   23:59:14	-- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll
00:25:43.653   23:59:14	-- interrupt/reap_unregistered_poller.sh@44 -- # [[  rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]]
00:25:43.653   23:59:14	-- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:25:43.653   23:59:14	-- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 132950
00:25:43.653   23:59:14	-- common/autotest_common.sh@936 -- # '[' -z 132950 ']'
00:25:43.653   23:59:14	-- common/autotest_common.sh@940 -- # kill -0 132950
00:25:43.653    23:59:14	-- common/autotest_common.sh@941 -- # uname
00:25:43.653   23:59:14	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:43.653    23:59:14	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132950
00:25:43.653   23:59:14	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:25:43.654   23:59:14	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:25:43.654  killing process with pid 132950
00:25:43.654   23:59:14	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 132950'
00:25:43.654   23:59:14	-- common/autotest_common.sh@955 -- # kill 132950
00:25:43.654   23:59:14	-- common/autotest_common.sh@960 -- # wait 132950
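killprocess, as traced for pid 132950, guards the signal: confirm the pid still exists with `kill -0`, refuse to kill anything whose comm is `sudo`, then SIGTERM and reap. A minimal sketch under those assumptions — `killproc` is a hypothetical name, and `wait` only reaps children of this shell, which holds here since the shell launched the target:

    killproc() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                    # already gone
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }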
00:25:44.588   23:59:15	-- interrupt/reap_unregistered_poller.sh@48 -- # cleanup
00:25:44.588   23:59:15	-- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:25:44.588  
00:25:44.588  real	0m3.515s
00:25:44.588  user	0m2.965s
00:25:44.588  sys	0m0.552s
00:25:44.588   23:59:15	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:44.588   23:59:15	-- common/autotest_common.sh@10 -- # set +x
00:25:44.588  ************************************
00:25:44.588  END TEST reap_unregistered_poller
00:25:44.588  ************************************
00:25:44.846    23:59:15	-- spdk/autotest.sh@191 -- # uname -s
00:25:44.846   23:59:15	-- spdk/autotest.sh@191 -- # [[ Linux == Linux ]]
00:25:44.846   23:59:15	-- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]]
00:25:44.846   23:59:15	-- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]]
00:25:44.846   23:59:15	-- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh
00:25:44.846   23:59:15	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:44.846   23:59:15	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:44.846   23:59:15	-- common/autotest_common.sh@10 -- # set +x
00:25:44.846  ************************************
00:25:44.846  START TEST spdk_dd
00:25:44.846  ************************************
00:25:44.846   23:59:15	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh
00:25:44.846  * Looking for test storage...
00:25:44.846  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:25:44.846     23:59:15	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:25:44.846      23:59:15	-- common/autotest_common.sh@1690 -- # lcov --version
00:25:44.846      23:59:15	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:25:44.846     23:59:15	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:25:44.846     23:59:15	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:25:44.846     23:59:15	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:25:44.846     23:59:15	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:25:44.846     23:59:15	-- scripts/common.sh@335 -- # IFS=.-:
00:25:44.846     23:59:15	-- scripts/common.sh@335 -- # read -ra ver1
00:25:44.846     23:59:15	-- scripts/common.sh@336 -- # IFS=.-:
00:25:44.846     23:59:15	-- scripts/common.sh@336 -- # read -ra ver2
00:25:44.846     23:59:15	-- scripts/common.sh@337 -- # local 'op=<'
00:25:44.846     23:59:15	-- scripts/common.sh@339 -- # ver1_l=2
00:25:44.846     23:59:15	-- scripts/common.sh@340 -- # ver2_l=1
00:25:44.846     23:59:15	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:25:44.846     23:59:15	-- scripts/common.sh@343 -- # case "$op" in
00:25:44.846     23:59:15	-- scripts/common.sh@344 -- # : 1
00:25:44.846     23:59:15	-- scripts/common.sh@363 -- # (( v = 0 ))
00:25:44.846     23:59:15	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:44.846      23:59:15	-- scripts/common.sh@364 -- # decimal 1
00:25:44.846      23:59:15	-- scripts/common.sh@352 -- # local d=1
00:25:44.846      23:59:15	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:44.846      23:59:15	-- scripts/common.sh@354 -- # echo 1
00:25:44.846     23:59:15	-- scripts/common.sh@364 -- # ver1[v]=1
00:25:44.846      23:59:15	-- scripts/common.sh@365 -- # decimal 2
00:25:44.846      23:59:15	-- scripts/common.sh@352 -- # local d=2
00:25:44.846      23:59:15	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:44.846      23:59:15	-- scripts/common.sh@354 -- # echo 2
00:25:44.846     23:59:15	-- scripts/common.sh@365 -- # ver2[v]=2
00:25:44.846     23:59:15	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:25:44.846     23:59:15	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:25:44.846     23:59:15	-- scripts/common.sh@367 -- # return 0
00:25:44.846     23:59:15	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:44.846     23:59:15	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:25:44.846  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:44.846  		--rc genhtml_branch_coverage=1
00:25:44.846  		--rc genhtml_function_coverage=1
00:25:44.846  		--rc genhtml_legend=1
00:25:44.846  		--rc geninfo_all_blocks=1
00:25:44.846  		--rc geninfo_unexecuted_blocks=1
00:25:44.846  		
00:25:44.846  		'
00:25:44.846     23:59:15	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:25:44.846  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:44.846  		--rc genhtml_branch_coverage=1
00:25:44.846  		--rc genhtml_function_coverage=1
00:25:44.846  		--rc genhtml_legend=1
00:25:44.846  		--rc geninfo_all_blocks=1
00:25:44.846  		--rc geninfo_unexecuted_blocks=1
00:25:44.846  		
00:25:44.846  		'
00:25:44.846     23:59:15	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:25:44.846  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:44.847  		--rc genhtml_branch_coverage=1
00:25:44.847  		--rc genhtml_function_coverage=1
00:25:44.847  		--rc genhtml_legend=1
00:25:44.847  		--rc geninfo_all_blocks=1
00:25:44.847  		--rc geninfo_unexecuted_blocks=1
00:25:44.847  		
00:25:44.847  		'
00:25:44.847     23:59:15	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:25:44.847  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:44.847  		--rc genhtml_branch_coverage=1
00:25:44.847  		--rc genhtml_function_coverage=1
00:25:44.847  		--rc genhtml_legend=1
00:25:44.847  		--rc geninfo_all_blocks=1
00:25:44.847  		--rc geninfo_unexecuted_blocks=1
00:25:44.847  		
00:25:44.847  		'
00:25:44.847    23:59:15	-- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:44.847     23:59:15	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:44.847     23:59:15	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:44.847     23:59:15	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:44.847      23:59:15	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:44.847      23:59:15	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:44.847      23:59:15	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:44.847      23:59:15	-- paths/export.sh@5 -- # export PATH
00:25:44.847      23:59:15	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:44.847   23:59:15	-- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:25:45.494  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:25:45.494  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:25:46.453   23:59:16	-- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace))
00:25:46.453    23:59:16	-- dd/dd.sh@11 -- # nvme_in_userspace
00:25:46.453    23:59:16	-- scripts/common.sh@311 -- # local bdf bdfs
00:25:46.453    23:59:16	-- scripts/common.sh@312 -- # local nvmes
00:25:46.453    23:59:16	-- scripts/common.sh@314 -- # [[ -n '' ]]
00:25:46.453    23:59:16	-- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:25:46.453     23:59:16	-- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02
00:25:46.453     23:59:16	-- scripts/common.sh@297 -- # local bdf=
00:25:46.453      23:59:16	-- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02
00:25:46.453      23:59:16	-- scripts/common.sh@232 -- # local class
00:25:46.453      23:59:16	-- scripts/common.sh@233 -- # local subclass
00:25:46.453      23:59:16	-- scripts/common.sh@234 -- # local progif
00:25:46.453       23:59:16	-- scripts/common.sh@235 -- # printf %02x 1
00:25:46.453      23:59:16	-- scripts/common.sh@235 -- # class=01
00:25:46.453       23:59:16	-- scripts/common.sh@236 -- # printf %02x 8
00:25:46.453      23:59:16	-- scripts/common.sh@236 -- # subclass=08
00:25:46.453       23:59:16	-- scripts/common.sh@237 -- # printf %02x 2
00:25:46.453      23:59:16	-- scripts/common.sh@237 -- # progif=02
00:25:46.453      23:59:16	-- scripts/common.sh@239 -- # hash lspci
00:25:46.453      23:59:16	-- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']'
00:25:46.454      23:59:16	-- scripts/common.sh@241 -- # lspci -mm -n -D
00:25:46.454      23:59:16	-- scripts/common.sh@242 -- # grep -i -- -p02
00:25:46.454      23:59:16	-- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:25:46.454      23:59:16	-- scripts/common.sh@244 -- # tr -d '"'
00:25:46.454     23:59:16	-- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@")
00:25:46.454     23:59:16	-- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0
00:25:46.454     23:59:16	-- scripts/common.sh@15 -- # local i
00:25:46.454     23:59:16	-- scripts/common.sh@18 -- # [[    =~  0000:00:06.0  ]]
00:25:46.454     23:59:16	-- scripts/common.sh@22 -- # [[ -z '' ]]
00:25:46.454     23:59:16	-- scripts/common.sh@24 -- # return 0
00:25:46.454     23:59:16	-- scripts/common.sh@301 -- # echo 0000:00:06.0
00:25:46.454    23:59:16	-- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}"
00:25:46.454    23:59:16	-- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]]
00:25:46.454     23:59:16	-- scripts/common.sh@322 -- # uname -s
00:25:46.454    23:59:16	-- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]]
00:25:46.454    23:59:16	-- scripts/common.sh@325 -- # bdfs+=("$bdf")
00:25:46.454    23:59:16	-- scripts/common.sh@327 -- # (( 1 ))
00:25:46.454    23:59:16	-- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0
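nvme_in_userspace, above, is a PCI class-code scan: class 01 (mass storage), subclass 08 (non-volatile memory controller), prog-if 02 means NVMe — exactly the `0108` / `-p02` pair the printf lines assemble. The same pipeline, condensed:

    lspci -mm -n -D | grep -- -p02 | awk '$2 ~ /0108/ { print $1 }' | tr -d '"'
    # -> 0000:00:06.0 on this runner (QEMU NVMe, 1b36:0010)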
00:25:46.454   23:59:16	-- dd/dd.sh@13 -- # check_liburing
00:25:46.454   23:59:16	-- dd/common.sh@139 -- # local lib so
00:25:46.454   23:59:16	-- dd/common.sh@140 -- # local -g liburing_in_use=0
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454    23:59:16	-- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1
00:25:46.454    23:59:16	-- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]]
00:25:46.454   23:59:16	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:46.454   23:59:16	-- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 ))
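check_liburing runs the spdk_dd binary with LD_TRACE_LOADED_OBJECTS=1, which makes the dynamic loader print the dependency list (ldd-style) instead of executing anything, and scans each `.so` name for liburing. None shows up above, so liburing_in_use stays 0 and the uring branch at dd.sh@15 is skipped. The check, condensed:

    if LD_TRACE_LOADED_OBJECTS=1 build/bin/spdk_dd | grep -q 'liburing\.so'; then
        liburing_in_use=1
    else
        liburing_in_use=0     # this run: no liburing in the link
    fi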
00:25:46.454   23:59:16	-- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0
00:25:46.454   23:59:16	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:25:46.454   23:59:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:46.454   23:59:16	-- common/autotest_common.sh@10 -- # set +x
00:25:46.454  ************************************
00:25:46.454  START TEST spdk_dd_basic_rw
00:25:46.454  ************************************
00:25:46.454   23:59:16	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0
00:25:46.454  * Looking for test storage...
00:25:46.454  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:25:46.454     23:59:17	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:25:46.454      23:59:17	-- common/autotest_common.sh@1690 -- # lcov --version
00:25:46.454      23:59:17	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:25:46.454     23:59:17	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:25:46.454     23:59:17	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:25:46.454     23:59:17	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:25:46.454     23:59:17	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:25:46.454     23:59:17	-- scripts/common.sh@335 -- # IFS=.-:
00:25:46.454     23:59:17	-- scripts/common.sh@335 -- # read -ra ver1
00:25:46.454     23:59:17	-- scripts/common.sh@336 -- # IFS=.-:
00:25:46.454     23:59:17	-- scripts/common.sh@336 -- # read -ra ver2
00:25:46.454     23:59:17	-- scripts/common.sh@337 -- # local 'op=<'
00:25:46.454     23:59:17	-- scripts/common.sh@339 -- # ver1_l=2
00:25:46.454     23:59:17	-- scripts/common.sh@340 -- # ver2_l=1
00:25:46.454     23:59:17	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:25:46.454     23:59:17	-- scripts/common.sh@343 -- # case "$op" in
00:25:46.454     23:59:17	-- scripts/common.sh@344 -- # : 1
00:25:46.454     23:59:17	-- scripts/common.sh@363 -- # (( v = 0 ))
00:25:46.454     23:59:17	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:46.454      23:59:17	-- scripts/common.sh@364 -- # decimal 1
00:25:46.454      23:59:17	-- scripts/common.sh@352 -- # local d=1
00:25:46.454      23:59:17	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:46.454      23:59:17	-- scripts/common.sh@354 -- # echo 1
00:25:46.454     23:59:17	-- scripts/common.sh@364 -- # ver1[v]=1
00:25:46.454      23:59:17	-- scripts/common.sh@365 -- # decimal 2
00:25:46.454      23:59:17	-- scripts/common.sh@352 -- # local d=2
00:25:46.454      23:59:17	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:46.454      23:59:17	-- scripts/common.sh@354 -- # echo 2
00:25:46.454     23:59:17	-- scripts/common.sh@365 -- # ver2[v]=2
00:25:46.454     23:59:17	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:25:46.454     23:59:17	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:25:46.454     23:59:17	-- scripts/common.sh@367 -- # return 0
00:25:46.454     23:59:17	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:46.454     23:59:17	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:25:46.454  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:46.454  		--rc genhtml_branch_coverage=1
00:25:46.454  		--rc genhtml_function_coverage=1
00:25:46.454  		--rc genhtml_legend=1
00:25:46.454  		--rc geninfo_all_blocks=1
00:25:46.454  		--rc geninfo_unexecuted_blocks=1
00:25:46.454  		
00:25:46.454  		'
00:25:46.454     23:59:17	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:25:46.454  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:46.454  		--rc genhtml_branch_coverage=1
00:25:46.454  		--rc genhtml_function_coverage=1
00:25:46.454  		--rc genhtml_legend=1
00:25:46.454  		--rc geninfo_all_blocks=1
00:25:46.454  		--rc geninfo_unexecuted_blocks=1
00:25:46.454  		
00:25:46.454  		'
00:25:46.454     23:59:17	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:25:46.454  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:46.454  		--rc genhtml_branch_coverage=1
00:25:46.454  		--rc genhtml_function_coverage=1
00:25:46.454  		--rc genhtml_legend=1
00:25:46.454  		--rc geninfo_all_blocks=1
00:25:46.454  		--rc geninfo_unexecuted_blocks=1
00:25:46.454  		
00:25:46.454  		'
00:25:46.454     23:59:17	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:25:46.454  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:46.454  		--rc genhtml_branch_coverage=1
00:25:46.454  		--rc genhtml_function_coverage=1
00:25:46.454  		--rc genhtml_legend=1
00:25:46.454  		--rc geninfo_all_blocks=1
00:25:46.454  		--rc geninfo_unexecuted_blocks=1
00:25:46.454  		
00:25:46.454  		'
00:25:46.454    23:59:17	-- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:46.454     23:59:17	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:46.454     23:59:17	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:46.454     23:59:17	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:46.454      23:59:17	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:46.454      23:59:17	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:46.455      23:59:17	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:46.455      23:59:17	-- paths/export.sh@5 -- # export PATH
00:25:46.455      23:59:17	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:46.455   23:59:17	-- dd/basic_rw.sh@80 -- # trap cleanup EXIT
00:25:46.455   23:59:17	-- dd/basic_rw.sh@82 -- # nvmes=("$@")
00:25:46.455   23:59:17	-- dd/basic_rw.sh@83 -- # nvme0=Nvme0
00:25:46.455   23:59:17	-- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0
00:25:46.455   23:59:17	-- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1
00:25:46.455   23:59:17	-- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie')
00:25:46.455   23:59:17	-- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0
00:25:46.455   23:59:17	-- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:25:46.455   23:59:17	-- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:46.455    23:59:17	-- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0
00:25:46.455    23:59:17	-- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id
00:25:46.455    23:59:17	-- dd/common.sh@126 -- # mapfile -t id
00:25:46.455     23:59:17	-- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0'
00:25:46.715    23:59:17	-- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID:                             1b36 Subsystem Vendor ID:                   1af4 Serial Number:                         12340 Model Number:                          QEMU NVMe Ctrl Firmware Version:                      8.0.0 Recommended Arb Burst:                 6 IEEE OUI Identifier:                   00 54 52 Multi-path I/O   May have multiple subsystem ports:   No   May have multiple controllers:       No   Associated with SR-IOV VF:           No Max Data Transfer Size:                524288 Max Number of Namespaces:              256 Max Number of I/O Queues:              64 NVMe Specification Version (VS):       1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries:                 2048 Contiguous Queues Required:            Yes Arbitration Mechanisms Supported   Weighted Round Robin:                Not Supported   Vendor Specific:                     Not Supported Reset Timeout:                         7500 ms Doorbell Stride:                       4 bytes NVM Subsystem Reset:                   Not Supported Command Sets Supported   NVM Command Set:                     Supported Boot Partition:                        Not Supported Memory Page Size Minimum:              4096 bytes Memory Page Size Maximum:              65536 bytes Persistent Memory Region:              Not Supported Optional Asynchronous Events Supported   Namespace Attribute Notices:         Supported   Firmware Activation Notices:         Not Supported   ANA Change Notices:                  Not Supported   PLE Aggregate Log Change Notices:    Not Supported   LBA Status Info Alert Notices:       Not Supported   EGE Aggregate Log Change Notices:    Not Supported   Normal NVM Subsystem Shutdown event: Not Supported   Zone Descriptor Change Notices:      Not Supported   Discovery Log Change Notices:        Not Supported Controller Attributes   128-bit Host Identifier:             Not Supported   Non-Operational Permissive Mode:     Not Supported   NVM Sets:                            Not Supported   Read Recovery Levels:                Not Supported   Endurance Groups:                    Not Supported   Predictable Latency Mode:            Not Supported   Traffic Based Keep ALive:            Not Supported   Namespace Granularity:               Not Supported   SQ Associations:                     Not Supported   UUID List:                           Not Supported   Multi-Domain Subsystem:              Not Supported   Fixed Capacity Management:           Not Supported   Variable Capacity Management:        Not Supported   Delete Endurance Group:              Not Supported   Delete NVM Set:                      Not Supported   Extended LBA Formats Supported:      Supported   Flexible Data Placement Supported:   Not Supported  Controller Memory Buffer Support ================================ Supported:                             No  Persistent Memory Region Support ================================ Supported:                             No  Admin Command Set Attributes ============================ Security Send/Receive:                 Not Supported Format NVM:                            Supported Firmware Activate/Download:            Not Supported Namespace Management:                  Supported Device Self-Test:                      
Not Supported Directives:                            Supported NVMe-MI:                               Not Supported Virtualization Management:             Not Supported Doorbell Buffer Config:                Supported Get LBA Status Capability:             Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit:                   4 Async Event Request Limit:             4 Number of Firmware Slots:              N/A Firmware Slot 1 Read-Only:             N/A Firmware Activation Without Reset:     N/A Multiple Update Detection Support:     N/A Firmware Update Granularity:           No Information Provided Per-Namespace SMART Log:               Yes Asymmetric Namespace Access Log Page:  Not Supported Subsystem NQN:                         nqn.2019-08.org.qemu:12340 Command Effects Log Page:              Supported Get Log Page Extended Data:            Supported Telemetry Log Pages:                   Not Supported Persistent Event Log Pages:            Not Supported Supported Log Pages Log Page:          May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page:   May Support Data Area 4 for Telemetry Log:         Not Supported Error Log Page Entries Supported:      1 Keep Alive:                            Not Supported  NVM Command Set Attributes ========================== Submission Queue Entry Size   Max:                       64   Min:                       64 Completion Queue Entry Size   Max:                       16   Min:                       16 Number of Namespaces:        256 Compare Command:             Supported Write Uncorrectable Command: Not Supported Dataset Management Command:  Supported Write Zeroes Command:        Supported Set Features Save Field:     Supported Reservations:                Not Supported Timestamp:                   Supported Copy:                        Supported Volatile Write Cache:        Present Atomic Write Unit (Normal):  1 Atomic Write Unit (PFail):   1 Atomic Compare & Write Unit: 1 Fused Compare & Write:       Not Supported Scatter-Gather List   SGL Command Set:           Supported   SGL Keyed:                 Not Supported   SGL Bit Bucket Descriptor: Not Supported   SGL Metadata Pointer:      Not Supported   Oversized SGL:             Not Supported   SGL Metadata Address:      Not Supported   SGL Offset:                Not Supported   Transport SGL Data Block:  Not Supported Replay Protected Memory Block:  Not Supported  Firmware Slot Information ========================= Active slot:                 1 Slot 1 Firmware Revision:    1.0   Commands Supported and Effects ============================== Admin Commands --------------    Delete I/O Submission Queue (00h): Supported     Create I/O Submission Queue (01h): Supported                    Get Log Page (02h): Supported     Delete I/O Completion Queue (04h): Supported     Create I/O Completion Queue (05h): Supported                        Identify (06h): Supported                           Abort (08h): Supported                    Set Features (09h): Supported                    Get Features (0Ah): Supported      Asynchronous Event Request (0Ch): Supported            Namespace Attachment (15h): Supported NS-Inventory-Change                  Directive Send (19h): Supported               Directive Receive (1Ah): Supported       Virtualization Management (1Ch): Supported          Doorbell Buffer Config (7Ch): Supported                      Format NVM (80h): Supported 
LBA-Change  I/O Commands ------------                          Flush (00h): Supported LBA-Change                           Write (01h): Supported LBA-Change                            Read (02h): Supported                         Compare (05h): Supported                    Write Zeroes (08h): Supported LBA-Change              Dataset Management (09h): Supported LBA-Change                         Unknown (0Ch): Supported                         Unknown (12h): Supported                            Copy (19h): Supported LBA-Change                         Unknown (1Dh): Supported LBA-Change   Error Log =========  Arbitration =========== Arbitration Burst:           no limit  Power Management ================ Number of Power States:          1 Current Power State:             Power State #0 Power State #0:   Max Power:                     25.00 W   Non-Operational State:         Operational   Entry Latency:                 16 microseconds   Exit Latency:                  4 microseconds   Relative Read Throughput:      0   Relative Read Latency:         0   Relative Write Throughput:     0   Relative Write Latency:        0   Idle Power:                     Not Reported   Active Power:                   Not Reported Non-Operational Permissive Mode: Not Supported  Health Information ================== Critical Warnings:   Available Spare Space:     OK   Temperature:               OK   Device Reliability:        OK   Read Only:                 No   Volatile Memory Backup:    OK Current Temperature:         323 Kelvin (50 Celsius) Temperature Threshold:       343 Kelvin (70 Celsius) Available Spare:             0% Available Spare Threshold:   0% Life Percentage Used:        0% Data Units Read:             103 Data Units Written:          7 Host Read Commands:          2208 Host Write Commands:         111 Controller Busy Time:        0 minutes Power Cycles:                0 Power On Hours:              0 hours Unsafe Shutdowns:            0 Unrecoverable Media Errors:  0 Lifetime Error Log Entries:  0 Warning Temperature Time:    0 minutes Critical Temperature Time:   0 minutes  Number of Queues ================ Number of I/O Submission Queues:      64 Number of I/O Completion Queues:      64  ZNS Specific Controller Data ============================ Zone Append Size Limit:      0   Active Namespaces ================= Namespace ID:1 Error Recovery Timeout:                Unlimited Command Set Identifier:                NVM (00h) Deallocate:                            Supported Deallocated/Unwritten Error:           Supported Deallocated Read Value:                All 0x00 Deallocate in Write Zeroes:            Not Supported Deallocated Guard Field:               0xFFFF Flush:                                 Supported Reservation:                           Not Supported Namespace Sharing Capabilities:        Private Size (in LBAs):                        1310720 (5GiB) Capacity (in LBAs):                    1310720 (5GiB) Utilization (in LBAs):                 1310720 (5GiB) Thin Provisioning:                     Not Supported Per-NS Atomic Units:                   No Maximum Single Source Range Length:    128 Maximum Copy Length:                   128 Maximum Source Range Count:            128 NGUID/EUI64 Never Reused:              No Namespace Write Protected:             No Number of LBA Formats:                 8 Current LBA Format:                    LBA Format #04 LBA Format #00: Data Size:   512  Metadata Size:     0 LBA Format #01: Data Size:   512  Metadata Size:     8 LBA Format #02: Data 
Size:   512  Metadata Size:    16 LBA Format #03: Data Size:   512  Metadata Size:    64 LBA Format #04: Data Size:  4096  Metadata Size:     0 LBA Format #05: Data Size:  4096  Metadata Size:     8 LBA Format #06: Data Size:  4096  Metadata Size:    16 LBA Format #07: Data Size:  4096  Metadata Size:    64  =~ Current LBA Format: *LBA Format #([0-9]+) ]]
00:25:46.715    23:59:17	-- dd/common.sh@130 -- # lbaf=04
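get_native_nvme_bs scrapes `spdk_nvme_identify` rather than sysfs: the regex above pulls the current LBA format index (04), and the follow-up @131 match below (cut off at the end of this excerpt) resolves that index against the dump, where format #04 carries a 4096-byte data size. A bash sketch of that two-step extraction:

    id=$(build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0')
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}
    re="LBA Format #$lbaf: Data Size: *([0-9]+)"
    [[ $id =~ $re ]] && bs=${BASH_REMATCH[1]}
    echo "$bs"    # 4096 on this namespace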
00:25:46.716    23:59:17	-- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID:                             1b36 Subsystem Vendor ID:                   1af4 Serial Number:                         12340 Model Number:                          QEMU NVMe Ctrl Firmware Version:                      8.0.0 Recommended Arb Burst:                 6 IEEE OUI Identifier:                   00 54 52 Multi-path I/O   May have multiple subsystem ports:   No   May have multiple controllers:       No   Associated with SR-IOV VF:           No Max Data Transfer Size:                524288 Max Number of Namespaces:              256 Max Number of I/O Queues:              64 NVMe Specification Version (VS):       1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries:                 2048 Contiguous Queues Required:            Yes Arbitration Mechanisms Supported   Weighted Round Robin:                Not Supported   Vendor Specific:                     Not Supported Reset Timeout:                         7500 ms Doorbell Stride:                       4 bytes NVM Subsystem Reset:                   Not Supported Command Sets Supported   NVM Command Set:                     Supported Boot Partition:                        Not Supported Memory Page Size Minimum:              4096 bytes Memory Page Size Maximum:              65536 bytes Persistent Memory Region:              Not Supported Optional Asynchronous Events Supported   Namespace Attribute Notices:         Supported   Firmware Activation Notices:         Not Supported   ANA Change Notices:                  Not Supported   PLE Aggregate Log Change Notices:    Not Supported   LBA Status Info Alert Notices:       Not Supported   EGE Aggregate Log Change Notices:    Not Supported   Normal NVM Subsystem Shutdown event: Not Supported   Zone Descriptor Change Notices:      Not Supported   Discovery Log Change Notices:        Not Supported Controller Attributes   128-bit Host Identifier:             Not Supported   Non-Operational Permissive Mode:     Not Supported   NVM Sets:                            Not Supported   Read Recovery Levels:                Not Supported   Endurance Groups:                    Not Supported   Predictable Latency Mode:            Not Supported   Traffic Based Keep ALive:            Not Supported   Namespace Granularity:               Not Supported   SQ Associations:                     Not Supported   UUID List:                           Not Supported   Multi-Domain Subsystem:              Not Supported   Fixed Capacity Management:           Not Supported   Variable Capacity Management:        Not Supported   Delete Endurance Group:              Not Supported   Delete NVM Set:                      Not Supported   Extended LBA Formats Supported:      Supported   Flexible Data Placement Supported:   Not Supported  Controller Memory Buffer Support ================================ Supported:                             No  Persistent Memory Region Support ================================ Supported:                             No  Admin Command Set Attributes ============================ Security Send/Receive:                 Not Supported Format NVM:                            Supported Firmware Activate/Download:            Not Supported Namespace Management:                  Supported Device Self-Test:                      
Not Supported Directives:                            Supported NVMe-MI:                               Not Supported Virtualization Management:             Not Supported Doorbell Buffer Config:                Supported Get LBA Status Capability:             Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit:                   4 Async Event Request Limit:             4 Number of Firmware Slots:              N/A Firmware Slot 1 Read-Only:             N/A Firmware Activation Without Reset:     N/A Multiple Update Detection Support:     N/A Firmware Update Granularity:           No Information Provided Per-Namespace SMART Log:               Yes Asymmetric Namespace Access Log Page:  Not Supported Subsystem NQN:                         nqn.2019-08.org.qemu:12340 Command Effects Log Page:              Supported Get Log Page Extended Data:            Supported Telemetry Log Pages:                   Not Supported Persistent Event Log Pages:            Not Supported Supported Log Pages Log Page:          May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page:   May Support Data Area 4 for Telemetry Log:         Not Supported Error Log Page Entries Supported:      1 Keep Alive:                            Not Supported  NVM Command Set Attributes ========================== Submission Queue Entry Size   Max:                       64   Min:                       64 Completion Queue Entry Size   Max:                       16   Min:                       16 Number of Namespaces:        256 Compare Command:             Supported Write Uncorrectable Command: Not Supported Dataset Management Command:  Supported Write Zeroes Command:        Supported Set Features Save Field:     Supported Reservations:                Not Supported Timestamp:                   Supported Copy:                        Supported Volatile Write Cache:        Present Atomic Write Unit (Normal):  1 Atomic Write Unit (PFail):   1 Atomic Compare & Write Unit: 1 Fused Compare & Write:       Not Supported Scatter-Gather List   SGL Command Set:           Supported   SGL Keyed:                 Not Supported   SGL Bit Bucket Descriptor: Not Supported   SGL Metadata Pointer:      Not Supported   Oversized SGL:             Not Supported   SGL Metadata Address:      Not Supported   SGL Offset:                Not Supported   Transport SGL Data Block:  Not Supported Replay Protected Memory Block:  Not Supported  Firmware Slot Information ========================= Active slot:                 1 Slot 1 Firmware Revision:    1.0   Commands Supported and Effects ============================== Admin Commands --------------    Delete I/O Submission Queue (00h): Supported     Create I/O Submission Queue (01h): Supported                    Get Log Page (02h): Supported     Delete I/O Completion Queue (04h): Supported     Create I/O Completion Queue (05h): Supported                        Identify (06h): Supported                           Abort (08h): Supported                    Set Features (09h): Supported                    Get Features (0Ah): Supported      Asynchronous Event Request (0Ch): Supported            Namespace Attachment (15h): Supported NS-Inventory-Change                  Directive Send (19h): Supported               Directive Receive (1Ah): Supported       Virtualization Management (1Ch): Supported          Doorbell Buffer Config (7Ch): Supported                      Format NVM (80h): Supported 
LBA-Change  I/O Commands ------------                          Flush (00h): Supported LBA-Change                           Write (01h): Supported LBA-Change                            Read (02h): Supported                         Compare (05h): Supported                    Write Zeroes (08h): Supported LBA-Change              Dataset Management (09h): Supported LBA-Change                         Unknown (0Ch): Supported                         Unknown (12h): Supported                            Copy (19h): Supported LBA-Change                         Unknown (1Dh): Supported LBA-Change   Error Log =========  Arbitration =========== Arbitration Burst:           no limit  Power Management ================ Number of Power States:          1 Current Power State:             Power State #0 Power State #0:   Max Power:                     25.00 W   Non-Operational State:         Operational   Entry Latency:                 16 microseconds   Exit Latency:                  4 microseconds   Relative Read Throughput:      0   Relative Read Latency:         0   Relative Write Throughput:     0   Relative Write Latency:        0   Idle Power:                     Not Reported   Active Power:                   Not Reported Non-Operational Permissive Mode: Not Supported  Health Information ================== Critical Warnings:   Available Spare Space:     OK   Temperature:               OK   Device Reliability:        OK   Read Only:                 No   Volatile Memory Backup:    OK Current Temperature:         323 Kelvin (50 Celsius) Temperature Threshold:       343 Kelvin (70 Celsius) Available Spare:             0% Available Spare Threshold:   0% Life Percentage Used:        0% Data Units Read:             103 Data Units Written:          7 Host Read Commands:          2208 Host Write Commands:         111 Controller Busy Time:        0 minutes Power Cycles:                0 Power On Hours:              0 hours Unsafe Shutdowns:            0 Unrecoverable Media Errors:  0 Lifetime Error Log Entries:  0 Warning Temperature Time:    0 minutes Critical Temperature Time:   0 minutes  Number of Queues ================ Number of I/O Submission Queues:      64 Number of I/O Completion Queues:      64  ZNS Specific Controller Data ============================ Zone Append Size Limit:      0   Active Namespaces ================= Namespace ID:1 Error Recovery Timeout:                Unlimited Command Set Identifier:                NVM (00h) Deallocate:                            Supported Deallocated/Unwritten Error:           Supported Deallocated Read Value:                All 0x00 Deallocate in Write Zeroes:            Not Supported Deallocated Guard Field:               0xFFFF Flush:                                 Supported Reservation:                           Not Supported Namespace Sharing Capabilities:        Private Size (in LBAs):                        1310720 (5GiB) Capacity (in LBAs):                    1310720 (5GiB) Utilization (in LBAs):                 1310720 (5GiB) Thin Provisioning:                     Not Supported Per-NS Atomic Units:                   No Maximum Single Source Range Length:    128 Maximum Copy Length:                   128 Maximum Source Range Count:            128 NGUID/EUI64 Never Reused:              No Namespace Write Protected:             No Number of LBA Formats:                 8 Current LBA Format:                    LBA Format #04 LBA Format #00: Data Size:   512  Metadata Size:     0 LBA Format #01: Data Size:   512  Metadata Size:     8 LBA Format #02: Data 
Size:   512  Metadata Size:    16 LBA Format #03: Data Size:   512  Metadata Size:    64 LBA Format #04: Data Size:  4096  Metadata Size:     0 LBA Format #05: Data Size:  4096  Metadata Size:     8 LBA Format #06: Data Size:  4096  Metadata Size:    16 LBA Format #07: Data Size:  4096  Metadata Size:    64  =~ LBA Format #04: Data Size: *([0-9]+) ]]
00:25:46.716    23:59:17	-- dd/common.sh@132 -- # lbaf=4096
00:25:46.716    23:59:17	-- dd/common.sh@134 -- # echo 4096
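The dd/common.sh steps above resolve the drive's native block size: @130 pulls the current LBA format index (lbaf=04) out of the identify dump, @131 matches that format's Data Size field with the regex visible at the end of the conditional, and @132/@134 hand back 4096. A minimal sketch of the same extraction, assuming the identify text has already been captured in a variable (the names here are illustrative, not the helper's own):

# id is assumed to hold the controller identify dump printed above
lbaf=$(sed -n 's/.*Current LBA Format: *LBA Format #\([0-9]*\).*/\1/p' <<<"$id")
re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id =~ $re ]] && native_bs=${BASH_REMATCH[1]}   # 4096 on this QEMU controller
echo "$native_bs"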
00:25:46.975   23:59:17	-- dd/basic_rw.sh@93 -- # native_bs=4096
00:25:46.975    23:59:17	-- dd/basic_rw.sh@96 -- # :
00:25:46.975   23:59:17	-- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:25:46.975    23:59:17	-- dd/basic_rw.sh@96 -- # gen_conf
00:25:46.975   23:59:17	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:25:46.975    23:59:17	-- dd/common.sh@31 -- # xtrace_disable
00:25:46.975   23:59:17	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:46.975    23:59:17	-- common/autotest_common.sh@10 -- # set +x
00:25:46.975   23:59:17	-- common/autotest_common.sh@10 -- # set +x
00:25:46.975  ************************************
00:25:46.975  START TEST dd_bs_lt_native_bs
00:25:46.975  ************************************
00:25:46.975   23:59:17	-- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:25:46.975   23:59:17	-- common/autotest_common.sh@650 -- # local es=0
00:25:46.975   23:59:17	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:25:46.975   23:59:17	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:46.975   23:59:17	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:46.975    23:59:17	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:46.975   23:59:17	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:46.975    23:59:17	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:46.975   23:59:17	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:46.975   23:59:17	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:46.975   23:59:17	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:25:46.975   23:59:17	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:25:46.975  {
00:25:46.975    "subsystems": [
00:25:46.975      {
00:25:46.975        "subsystem": "bdev",
00:25:46.975        "config": [
00:25:46.975          {
00:25:46.975            "params": {
00:25:46.975              "trtype": "pcie",
00:25:46.975              "traddr": "0000:00:06.0",
00:25:46.975              "name": "Nvme0"
00:25:46.975            },
00:25:46.975            "method": "bdev_nvme_attach_controller"
00:25:46.975          },
00:25:46.975          {
00:25:46.975            "method": "bdev_wait_for_examine"
00:25:46.975          }
00:25:46.975        ]
00:25:46.975      }
00:25:46.975    ]
00:25:46.975  }
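The JSON above is gen_conf's output, and spdk_dd reads it from /dev/fd/61 while /dev/fd/62 carries the data: both arguments are bash process substitutions rather than files on disk. A sketch of the same wiring, reusing the exact config from the trace (the conf helper name is invented here and is reused by later sketches):

conf() {
    printf '%s' '{ "subsystems": [ { "subsystem": "bdev", "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" } ] } ] }'
}
# bash turns each <(...) into a /dev/fd/N path, as seen on the command line;
# this particular write is *expected* to fail, since 2048 < the 4096 native bs:
spdk_dd --if=<(head -c 4096 /dev/urandom) --ob=Nvme0n1 --bs=2048 --json <(conf)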
00:25:46.975  [2024-12-13 23:59:17.534276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:46.975  [2024-12-13 23:59:17.534477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133279 ]
00:25:46.975  [2024-12-13 23:59:17.705409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:47.234  [2024-12-13 23:59:17.965777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:47.800  [2024-12-13 23:59:18.324824] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size
00:25:47.800  [2024-12-13 23:59:18.324931] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:48.366  [2024-12-13 23:59:18.964085] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:25:48.625   23:59:19	-- common/autotest_common.sh@653 -- # es=234
00:25:48.625   23:59:19	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:48.625   23:59:19	-- common/autotest_common.sh@662 -- # es=106
00:25:48.625   23:59:19	-- common/autotest_common.sh@663 -- # case "$es" in
00:25:48.625   23:59:19	-- common/autotest_common.sh@670 -- # es=1
00:25:48.625   23:59:19	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
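The @650 through @677 steps above are the tail of the NOT wrapper: spdk_dd exits with 234, any status above 128 is folded to 106, the case branch taken here normalizes it to 1, and (( !es == 0 )) then succeeds precisely because the wrapped command failed. A condensed re-creation of that handling; the arm shown in the case is an assumption, the real statement has more branches:

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=106   # fold signal-range statuses, as in the trace
    case "$es" in
        106) es=1 ;;           # assumed arm; the trace only shows es=1 resulting
    esac
    (( !es == 0 ))             # true (exit 0) only when the command failed
}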
00:25:48.625  
00:25:48.625  real	0m1.877s
00:25:48.625  user	0m1.550s
00:25:48.625  sys	0m0.287s
00:25:48.625   23:59:19	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:48.625   23:59:19	-- common/autotest_common.sh@10 -- # set +x
00:25:48.625  ************************************
00:25:48.625  END TEST dd_bs_lt_native_bs
00:25:48.625  ************************************
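run_test is the harness that produced the START/END banners and the real/user/sys figures above: it validates its arguments (the '[' 8 -le 1 ']' check), prints the opening banner, times the test body, and closes with the matching banner. A stripped-down equivalent, omitting the argument checks:

run_test() {
    local name=$1; shift
    printf '************************************\nSTART TEST %s\n************************************\n' "$name"
    time "$@"
    local rc=$?
    printf '************************************\nEND TEST %s\n************************************\n' "$name"
    return "$rc"
}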
00:25:48.883   23:59:19	-- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096
00:25:48.883   23:59:19	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:25:48.883   23:59:19	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:48.883   23:59:19	-- common/autotest_common.sh@10 -- # set +x
00:25:48.883  ************************************
00:25:48.883  START TEST dd_rw
00:25:48.883  ************************************
00:25:48.883   23:59:19	-- common/autotest_common.sh@1114 -- # basic_rw 4096
00:25:48.883   23:59:19	-- dd/basic_rw.sh@11 -- # local native_bs=4096
00:25:48.883   23:59:19	-- dd/basic_rw.sh@12 -- # local count size
00:25:48.883   23:59:19	-- dd/basic_rw.sh@13 -- # local qds bss
00:25:48.883   23:59:19	-- dd/basic_rw.sh@15 -- # qds=(1 64)
00:25:48.883   23:59:19	-- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:25:48.883   23:59:19	-- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:25:48.883   23:59:19	-- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:25:48.883   23:59:19	-- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:25:48.883   23:59:19	-- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:25:48.883   23:59:19	-- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:25:48.883   23:59:19	-- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:25:48.883   23:59:19	-- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:25:48.883   23:59:19	-- dd/basic_rw.sh@23 -- # count=15
00:25:48.883   23:59:19	-- dd/basic_rw.sh@24 -- # count=15
00:25:48.883   23:59:19	-- dd/basic_rw.sh@25 -- # size=61440
00:25:48.883   23:59:19	-- dd/basic_rw.sh@27 -- # gen_bytes 61440
00:25:48.883   23:59:19	-- dd/common.sh@98 -- # xtrace_disable
00:25:48.883   23:59:19	-- common/autotest_common.sh@10 -- # set +x
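The loop traced above builds the block-size sweep by left-shifting the native size: bss collects 4096<<0, 4096<<1 and 4096<<2, i.e. 4096, 8192 and 16384, and each of those runs at queue depths 1 and 64. The counts are then picked so every pass moves a similar volume: 15 * 4096 = 61440, 7 * 8192 = 57344 and 3 * 16384 = 49152 bytes. The same setup in isolation:

native_bs=4096
qds=(1 64)
bss=()
for bs in {0..2}; do
    bss+=($(( native_bs << bs )))   # 4096 8192 16384
done
echo "bs sweep: ${bss[*]}  qd sweep: ${qds[*]}"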
00:25:49.450   23:59:19	-- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62
00:25:49.450    23:59:19	-- dd/basic_rw.sh@30 -- # gen_conf
00:25:49.450    23:59:19	-- dd/common.sh@31 -- # xtrace_disable
00:25:49.450    23:59:19	-- common/autotest_common.sh@10 -- # set +x
00:25:49.450  {
00:25:49.450    "subsystems": [
00:25:49.450      {
00:25:49.450        "subsystem": "bdev",
00:25:49.450        "config": [
00:25:49.450          {
00:25:49.450            "params": {
00:25:49.450              "trtype": "pcie",
00:25:49.450              "traddr": "0000:00:06.0",
00:25:49.450              "name": "Nvme0"
00:25:49.450            },
00:25:49.450            "method": "bdev_nvme_attach_controller"
00:25:49.450          },
00:25:49.450          {
00:25:49.450            "method": "bdev_wait_for_examine"
00:25:49.450          }
00:25:49.450        ]
00:25:49.450      }
00:25:49.450    ]
00:25:49.450  }
00:25:49.450  [2024-12-13 23:59:20.003289] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:49.450  [2024-12-13 23:59:20.003486] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133341 ]
00:25:49.450  [2024-12-13 23:59:20.171079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:49.709  [2024-12-13 23:59:20.360832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:50.275  
[2024-12-13T23:59:21.943Z] Copying: 60/60 [kB] (average 19 MBps)
00:25:51.211  
00:25:51.211   23:59:21	-- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62
00:25:51.211    23:59:21	-- dd/basic_rw.sh@37 -- # gen_conf
00:25:51.211    23:59:21	-- dd/common.sh@31 -- # xtrace_disable
00:25:51.211    23:59:21	-- common/autotest_common.sh@10 -- # set +x
00:25:51.211  {
00:25:51.211    "subsystems": [
00:25:51.211      {
00:25:51.211        "subsystem": "bdev",
00:25:51.211        "config": [
00:25:51.211          {
00:25:51.211            "params": {
00:25:51.211              "trtype": "pcie",
00:25:51.211              "traddr": "0000:00:06.0",
00:25:51.211              "name": "Nvme0"
00:25:51.211            },
00:25:51.211            "method": "bdev_nvme_attach_controller"
00:25:51.211          },
00:25:51.211          {
00:25:51.211            "method": "bdev_wait_for_examine"
00:25:51.211          }
00:25:51.211        ]
00:25:51.211      }
00:25:51.211    ]
00:25:51.211  }
00:25:51.211  [2024-12-13 23:59:21.752056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:51.211  [2024-12-13 23:59:21.752242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133361 ]
00:25:51.211  [2024-12-13 23:59:21.918542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:51.469  [2024-12-13 23:59:22.107563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:52.035  
[2024-12-13T23:59:23.703Z] Copying: 60/60 [kB] (average 19 MBps)
00:25:52.971  
00:25:52.971   23:59:23	-- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:52.971   23:59:23	-- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440
00:25:52.971   23:59:23	-- dd/common.sh@10 -- # local bdev=Nvme0n1
00:25:52.971   23:59:23	-- dd/common.sh@11 -- # local nvme_ref=
00:25:52.971   23:59:23	-- dd/common.sh@12 -- # local size=61440
00:25:52.971   23:59:23	-- dd/common.sh@14 -- # local bs=1048576
00:25:52.971   23:59:23	-- dd/common.sh@15 -- # local count=1
00:25:52.971   23:59:23	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:25:52.971    23:59:23	-- dd/common.sh@18 -- # gen_conf
00:25:52.971    23:59:23	-- dd/common.sh@31 -- # xtrace_disable
00:25:52.971    23:59:23	-- common/autotest_common.sh@10 -- # set +x
00:25:52.971  {
00:25:52.971    "subsystems": [
00:25:52.971      {
00:25:52.971        "subsystem": "bdev",
00:25:52.971        "config": [
00:25:52.971          {
00:25:52.971            "params": {
00:25:52.971              "trtype": "pcie",
00:25:52.971              "traddr": "0000:00:06.0",
00:25:52.971              "name": "Nvme0"
00:25:52.971            },
00:25:52.971            "method": "bdev_nvme_attach_controller"
00:25:52.971          },
00:25:52.971          {
00:25:52.971            "method": "bdev_wait_for_examine"
00:25:52.971          }
00:25:52.971        ]
00:25:52.971      }
00:25:52.971    ]
00:25:52.971  }
00:25:52.971  [2024-12-13 23:59:23.580733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:52.971  [2024-12-13 23:59:23.581560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133393 ]
00:25:53.230  [2024-12-13 23:59:23.747333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:53.230  [2024-12-13 23:59:23.936643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:53.797  
[2024-12-13T23:59:25.463Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:25:54.731  
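That completes one full dd_rw iteration: write dd.dump0 to the bdev at the current bs/qd, read the same 61440 bytes back into dd.dump1, diff the two files, then let clear_nvme scrub the device by copying a single 1 MiB block of zeroes over it. Condensed, with the conf helper sketched earlier (paths shortened):

bs=4096 qd=1 count=15
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=$bs --qd=$qd --json <(conf)
spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=$bs --qd=$qd --count=$count --json <(conf)
diff -q dd.dump0 dd.dump1                                 # round trip must match
spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(conf)   # clear_nvme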
00:25:54.731   23:59:25	-- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:25:54.731   23:59:25	-- dd/basic_rw.sh@23 -- # count=15
00:25:54.731   23:59:25	-- dd/basic_rw.sh@24 -- # count=15
00:25:54.731   23:59:25	-- dd/basic_rw.sh@25 -- # size=61440
00:25:54.731   23:59:25	-- dd/basic_rw.sh@27 -- # gen_bytes 61440
00:25:54.731   23:59:25	-- dd/common.sh@98 -- # xtrace_disable
00:25:54.731   23:59:25	-- common/autotest_common.sh@10 -- # set +x
00:25:55.299   23:59:25	-- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62
00:25:55.299    23:59:25	-- dd/basic_rw.sh@30 -- # gen_conf
00:25:55.299    23:59:25	-- dd/common.sh@31 -- # xtrace_disable
00:25:55.299    23:59:25	-- common/autotest_common.sh@10 -- # set +x
00:25:55.299  {
00:25:55.299    "subsystems": [
00:25:55.299      {
00:25:55.299        "subsystem": "bdev",
00:25:55.299        "config": [
00:25:55.299          {
00:25:55.299            "params": {
00:25:55.299              "trtype": "pcie",
00:25:55.299              "traddr": "0000:00:06.0",
00:25:55.299              "name": "Nvme0"
00:25:55.299            },
00:25:55.299            "method": "bdev_nvme_attach_controller"
00:25:55.299          },
00:25:55.299          {
00:25:55.299            "method": "bdev_wait_for_examine"
00:25:55.299          }
00:25:55.299        ]
00:25:55.299      }
00:25:55.299    ]
00:25:55.299  }
00:25:55.299  [2024-12-13 23:59:25.841938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:55.299  [2024-12-13 23:59:25.842163] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133434 ]
00:25:55.299  [2024-12-13 23:59:26.006970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:55.557  [2024-12-13 23:59:26.186262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:55.816  
[2024-12-13T23:59:27.924Z] Copying: 60/60 [kB] (average 29 MBps)
00:25:57.192  
00:25:57.192   23:59:27	-- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62
00:25:57.192    23:59:27	-- dd/basic_rw.sh@37 -- # gen_conf
00:25:57.192    23:59:27	-- dd/common.sh@31 -- # xtrace_disable
00:25:57.192    23:59:27	-- common/autotest_common.sh@10 -- # set +x
00:25:57.192  {
00:25:57.192    "subsystems": [
00:25:57.192      {
00:25:57.192        "subsystem": "bdev",
00:25:57.192        "config": [
00:25:57.192          {
00:25:57.192            "params": {
00:25:57.192              "trtype": "pcie",
00:25:57.192              "traddr": "0000:00:06.0",
00:25:57.192              "name": "Nvme0"
00:25:57.192            },
00:25:57.192            "method": "bdev_nvme_attach_controller"
00:25:57.192          },
00:25:57.192          {
00:25:57.192            "method": "bdev_wait_for_examine"
00:25:57.192          }
00:25:57.192        ]
00:25:57.192      }
00:25:57.192    ]
00:25:57.192  }
00:25:57.192  [2024-12-13 23:59:27.677331] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:57.192  [2024-12-13 23:59:27.677518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133456 ]
00:25:57.192  [2024-12-13 23:59:27.851034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:57.451  [2024-12-13 23:59:28.040737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:57.709  
[2024-12-13T23:59:29.376Z] Copying: 60/60 [kB] (average 58 MBps)
00:25:58.644  
00:25:58.644   23:59:29	-- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:58.644   23:59:29	-- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440
00:25:58.644   23:59:29	-- dd/common.sh@10 -- # local bdev=Nvme0n1
00:25:58.644   23:59:29	-- dd/common.sh@11 -- # local nvme_ref=
00:25:58.644   23:59:29	-- dd/common.sh@12 -- # local size=61440
00:25:58.644   23:59:29	-- dd/common.sh@14 -- # local bs=1048576
00:25:58.644   23:59:29	-- dd/common.sh@15 -- # local count=1
00:25:58.645   23:59:29	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:25:58.645    23:59:29	-- dd/common.sh@18 -- # gen_conf
00:25:58.645    23:59:29	-- dd/common.sh@31 -- # xtrace_disable
00:25:58.645    23:59:29	-- common/autotest_common.sh@10 -- # set +x
00:25:58.903  [2024-12-13 23:59:29.381374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:58.903  [2024-12-13 23:59:29.381535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133488 ]
00:25:58.903  {
00:25:58.903    "subsystems": [
00:25:58.903      {
00:25:58.903        "subsystem": "bdev",
00:25:58.903        "config": [
00:25:58.903          {
00:25:58.903            "params": {
00:25:58.903              "trtype": "pcie",
00:25:58.903              "traddr": "0000:00:06.0",
00:25:58.903              "name": "Nvme0"
00:25:58.903            },
00:25:58.903            "method": "bdev_nvme_attach_controller"
00:25:58.903          },
00:25:58.903          {
00:25:58.903            "method": "bdev_wait_for_examine"
00:25:58.903          }
00:25:58.903        ]
00:25:58.903      }
00:25:58.903    ]
00:25:58.903  }
00:25:58.903  [2024-12-13 23:59:29.534165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:59.161  [2024-12-13 23:59:29.694736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:59.420  
[2024-12-13T23:59:31.088Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:26:00.356  
00:26:00.356   23:59:30	-- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:26:00.356   23:59:30	-- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:26:00.356   23:59:30	-- dd/basic_rw.sh@23 -- # count=7
00:26:00.356   23:59:30	-- dd/basic_rw.sh@24 -- # count=7
00:26:00.356   23:59:30	-- dd/basic_rw.sh@25 -- # size=57344
00:26:00.356   23:59:30	-- dd/basic_rw.sh@27 -- # gen_bytes 57344
00:26:00.356   23:59:30	-- dd/common.sh@98 -- # xtrace_disable
00:26:00.356   23:59:30	-- common/autotest_common.sh@10 -- # set +x
00:26:00.923   23:59:31	-- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62
00:26:00.923    23:59:31	-- dd/basic_rw.sh@30 -- # gen_conf
00:26:00.923    23:59:31	-- dd/common.sh@31 -- # xtrace_disable
00:26:00.923    23:59:31	-- common/autotest_common.sh@10 -- # set +x
00:26:00.923  {
00:26:00.923    "subsystems": [
00:26:00.923      {
00:26:00.923        "subsystem": "bdev",
00:26:00.923        "config": [
00:26:00.923          {
00:26:00.923            "params": {
00:26:00.923              "trtype": "pcie",
00:26:00.923              "traddr": "0000:00:06.0",
00:26:00.923              "name": "Nvme0"
00:26:00.923            },
00:26:00.923            "method": "bdev_nvme_attach_controller"
00:26:00.923          },
00:26:00.923          {
00:26:00.923            "method": "bdev_wait_for_examine"
00:26:00.923          }
00:26:00.923        ]
00:26:00.923      }
00:26:00.923    ]
00:26:00.923  }
00:26:00.923  [2024-12-13 23:59:31.542366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:00.923  [2024-12-13 23:59:31.542566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133516 ]
00:26:01.182  [2024-12-13 23:59:31.710668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:01.182  [2024-12-13 23:59:31.895424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:01.749  
[2024-12-13T23:59:33.416Z] Copying: 56/56 [kB] (average 54 MBps)
00:26:02.684  
00:26:02.684   23:59:33	-- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62
00:26:02.684    23:59:33	-- dd/basic_rw.sh@37 -- # gen_conf
00:26:02.684    23:59:33	-- dd/common.sh@31 -- # xtrace_disable
00:26:02.684    23:59:33	-- common/autotest_common.sh@10 -- # set +x
00:26:02.684  [2024-12-13 23:59:33.280723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:02.684  [2024-12-13 23:59:33.280879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133548 ]
00:26:02.684  {
00:26:02.684    "subsystems": [
00:26:02.684      {
00:26:02.684        "subsystem": "bdev",
00:26:02.684        "config": [
00:26:02.684          {
00:26:02.684            "params": {
00:26:02.684              "trtype": "pcie",
00:26:02.684              "traddr": "0000:00:06.0",
00:26:02.684              "name": "Nvme0"
00:26:02.684            },
00:26:02.684            "method": "bdev_nvme_attach_controller"
00:26:02.684          },
00:26:02.684          {
00:26:02.684            "method": "bdev_wait_for_examine"
00:26:02.684          }
00:26:02.684        ]
00:26:02.684      }
00:26:02.684    ]
00:26:02.684  }
00:26:02.943  [2024-12-13 23:59:33.433249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:02.943  [2024-12-13 23:59:33.612081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:03.510  
[2024-12-13T23:59:35.179Z] Copying: 56/56 [kB] (average 54 MBps)
00:26:04.447  
00:26:04.447   23:59:34	-- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:04.447   23:59:34	-- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344
00:26:04.447   23:59:34	-- dd/common.sh@10 -- # local bdev=Nvme0n1
00:26:04.447   23:59:34	-- dd/common.sh@11 -- # local nvme_ref=
00:26:04.447   23:59:34	-- dd/common.sh@12 -- # local size=57344
00:26:04.447   23:59:34	-- dd/common.sh@14 -- # local bs=1048576
00:26:04.447   23:59:34	-- dd/common.sh@15 -- # local count=1
00:26:04.447   23:59:34	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:26:04.447    23:59:34	-- dd/common.sh@18 -- # gen_conf
00:26:04.447    23:59:34	-- dd/common.sh@31 -- # xtrace_disable
00:26:04.447    23:59:34	-- common/autotest_common.sh@10 -- # set +x
00:26:04.447  {
00:26:04.447    "subsystems": [
00:26:04.447      {
00:26:04.447        "subsystem": "bdev",
00:26:04.447        "config": [
00:26:04.447          {
00:26:04.447            "params": {
00:26:04.447              "trtype": "pcie",
00:26:04.447              "traddr": "0000:00:06.0",
00:26:04.447              "name": "Nvme0"
00:26:04.447            },
00:26:04.447            "method": "bdev_nvme_attach_controller"
00:26:04.447          },
00:26:04.447          {
00:26:04.447            "method": "bdev_wait_for_examine"
00:26:04.447          }
00:26:04.447        ]
00:26:04.447      }
00:26:04.447    ]
00:26:04.447  }
00:26:04.447  [2024-12-13 23:59:35.028034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:04.447  [2024-12-13 23:59:35.028378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133578 ]
00:26:04.706  [2024-12-13 23:59:35.194939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:04.706  [2024-12-13 23:59:35.367105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:05.271  
[2024-12-13T23:59:36.571Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:26:05.839  
00:26:06.097   23:59:36	-- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:26:06.097   23:59:36	-- dd/basic_rw.sh@23 -- # count=7
00:26:06.097   23:59:36	-- dd/basic_rw.sh@24 -- # count=7
00:26:06.097   23:59:36	-- dd/basic_rw.sh@25 -- # size=57344
00:26:06.097   23:59:36	-- dd/basic_rw.sh@27 -- # gen_bytes 57344
00:26:06.097   23:59:36	-- dd/common.sh@98 -- # xtrace_disable
00:26:06.097   23:59:36	-- common/autotest_common.sh@10 -- # set +x
00:26:06.395   23:59:37	-- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62
00:26:06.395    23:59:37	-- dd/basic_rw.sh@30 -- # gen_conf
00:26:06.395    23:59:37	-- dd/common.sh@31 -- # xtrace_disable
00:26:06.395    23:59:37	-- common/autotest_common.sh@10 -- # set +x
00:26:06.669  {
00:26:06.669    "subsystems": [
00:26:06.669      {
00:26:06.669        "subsystem": "bdev",
00:26:06.669        "config": [
00:26:06.669          {
00:26:06.669            "params": {
00:26:06.669              "trtype": "pcie",
00:26:06.669              "traddr": "0000:00:06.0",
00:26:06.669              "name": "Nvme0"
00:26:06.669            },
00:26:06.669            "method": "bdev_nvme_attach_controller"
00:26:06.669          },
00:26:06.669          {
00:26:06.669            "method": "bdev_wait_for_examine"
00:26:06.669          }
00:26:06.669        ]
00:26:06.669      }
00:26:06.669    ]
00:26:06.669  }
00:26:06.669  [2024-12-13 23:59:37.110124] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:06.669  [2024-12-13 23:59:37.110298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133610 ]
00:26:06.669  [2024-12-13 23:59:37.278024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:06.947  [2024-12-13 23:59:37.454362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:07.206  
[2024-12-13T23:59:38.873Z] Copying: 56/56 [kB] (average 54 MBps)
00:26:08.141  
00:26:08.141   23:59:38	-- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62
00:26:08.141    23:59:38	-- dd/basic_rw.sh@37 -- # gen_conf
00:26:08.141    23:59:38	-- dd/common.sh@31 -- # xtrace_disable
00:26:08.141    23:59:38	-- common/autotest_common.sh@10 -- # set +x
00:26:08.141  {
00:26:08.141    "subsystems": [
00:26:08.141      {
00:26:08.141        "subsystem": "bdev",
00:26:08.141        "config": [
00:26:08.141          {
00:26:08.141            "params": {
00:26:08.141              "trtype": "pcie",
00:26:08.141              "traddr": "0000:00:06.0",
00:26:08.141              "name": "Nvme0"
00:26:08.141            },
00:26:08.141            "method": "bdev_nvme_attach_controller"
00:26:08.141          },
00:26:08.141          {
00:26:08.141            "method": "bdev_wait_for_examine"
00:26:08.141          }
00:26:08.141        ]
00:26:08.141      }
00:26:08.141    ]
00:26:08.141  }
00:26:08.141  [2024-12-13 23:59:38.815677] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:08.141  [2024-12-13 23:59:38.815878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133630 ]
00:26:08.400  [2024-12-13 23:59:38.982678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:08.676  [2024-12-13 23:59:39.171471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:08.935  
[2024-12-13T23:59:40.606Z] Copying: 56/56 [kB] (average 54 MBps)
00:26:09.874  
00:26:09.874   23:59:40	-- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:09.874   23:59:40	-- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344
00:26:09.874   23:59:40	-- dd/common.sh@10 -- # local bdev=Nvme0n1
00:26:09.874   23:59:40	-- dd/common.sh@11 -- # local nvme_ref=
00:26:09.874   23:59:40	-- dd/common.sh@12 -- # local size=57344
00:26:09.874   23:59:40	-- dd/common.sh@14 -- # local bs=1048576
00:26:09.874   23:59:40	-- dd/common.sh@15 -- # local count=1
00:26:09.874   23:59:40	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:26:09.874    23:59:40	-- dd/common.sh@18 -- # gen_conf
00:26:09.874    23:59:40	-- dd/common.sh@31 -- # xtrace_disable
00:26:09.874    23:59:40	-- common/autotest_common.sh@10 -- # set +x
00:26:09.874  {
00:26:09.874    "subsystems": [
00:26:09.874      {
00:26:09.874        "subsystem": "bdev",
00:26:09.874        "config": [
00:26:09.874          {
00:26:09.874            "params": {
00:26:09.874              "trtype": "pcie",
00:26:09.874              "traddr": "0000:00:06.0",
00:26:09.874              "name": "Nvme0"
00:26:09.874            },
00:26:09.874            "method": "bdev_nvme_attach_controller"
00:26:09.874          },
00:26:09.874          {
00:26:09.874            "method": "bdev_wait_for_examine"
00:26:09.874          }
00:26:09.874        ]
00:26:09.874      }
00:26:09.874    ]
00:26:09.874  }
00:26:09.874  [2024-12-13 23:59:40.459811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:09.874  [2024-12-13 23:59:40.460074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133662 ]
00:26:10.134  [2024-12-13 23:59:40.627912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:10.134  [2024-12-13 23:59:40.792573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:10.392  
[2024-12-13T23:59:42.062Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:26:11.330  
00:26:11.588   23:59:42	-- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:26:11.588   23:59:42	-- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:26:11.588   23:59:42	-- dd/basic_rw.sh@23 -- # count=3
00:26:11.588   23:59:42	-- dd/basic_rw.sh@24 -- # count=3
00:26:11.588   23:59:42	-- dd/basic_rw.sh@25 -- # size=49152
00:26:11.588   23:59:42	-- dd/basic_rw.sh@27 -- # gen_bytes 49152
00:26:11.588   23:59:42	-- dd/common.sh@98 -- # xtrace_disable
00:26:11.588   23:59:42	-- common/autotest_common.sh@10 -- # set +x
00:26:11.848   23:59:42	-- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62
00:26:11.848    23:59:42	-- dd/basic_rw.sh@30 -- # gen_conf
00:26:11.848    23:59:42	-- dd/common.sh@31 -- # xtrace_disable
00:26:11.848    23:59:42	-- common/autotest_common.sh@10 -- # set +x
00:26:11.848  {
00:26:11.848    "subsystems": [
00:26:11.848      {
00:26:11.848        "subsystem": "bdev",
00:26:11.848        "config": [
00:26:11.848          {
00:26:11.848            "params": {
00:26:11.848              "trtype": "pcie",
00:26:11.848              "traddr": "0000:00:06.0",
00:26:11.848              "name": "Nvme0"
00:26:11.848            },
00:26:11.848            "method": "bdev_nvme_attach_controller"
00:26:11.848          },
00:26:11.848          {
00:26:11.848            "method": "bdev_wait_for_examine"
00:26:11.848          }
00:26:11.848        ]
00:26:11.848      }
00:26:11.848    ]
00:26:11.848  }
00:26:11.848  [2024-12-13 23:59:42.563579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:11.848  [2024-12-13 23:59:42.563789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133690 ]
00:26:12.106  [2024-12-13 23:59:42.731391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:12.364  [2024-12-13 23:59:42.922397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:12.622  
[2024-12-13T23:59:44.289Z] Copying: 48/48 [kB] (average 46 MBps)
00:26:13.557  
00:26:13.557   23:59:44	-- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62
00:26:13.557    23:59:44	-- dd/basic_rw.sh@37 -- # gen_conf
00:26:13.557    23:59:44	-- dd/common.sh@31 -- # xtrace_disable
00:26:13.557    23:59:44	-- common/autotest_common.sh@10 -- # set +x
00:26:13.557  {
00:26:13.557    "subsystems": [
00:26:13.557      {
00:26:13.557        "subsystem": "bdev",
00:26:13.557        "config": [
00:26:13.557          {
00:26:13.557            "params": {
00:26:13.557              "trtype": "pcie",
00:26:13.557              "traddr": "0000:00:06.0",
00:26:13.557              "name": "Nvme0"
00:26:13.557            },
00:26:13.557            "method": "bdev_nvme_attach_controller"
00:26:13.557          },
00:26:13.557          {
00:26:13.557            "method": "bdev_wait_for_examine"
00:26:13.557          }
00:26:13.557        ]
00:26:13.557      }
00:26:13.557    ]
00:26:13.557  }
00:26:13.557  [2024-12-13 23:59:44.187280] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:13.557  [2024-12-13 23:59:44.187464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133720 ]
00:26:13.816  [2024-12-13 23:59:44.354471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:13.816  [2024-12-13 23:59:44.514501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:14.384  
[2024-12-13T23:59:46.062Z] Copying: 48/48 [kB] (average 46 MBps)
00:26:15.330  
00:26:15.330   23:59:45	-- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:15.330   23:59:45	-- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152
00:26:15.330   23:59:45	-- dd/common.sh@10 -- # local bdev=Nvme0n1
00:26:15.330   23:59:45	-- dd/common.sh@11 -- # local nvme_ref=
00:26:15.330   23:59:45	-- dd/common.sh@12 -- # local size=49152
00:26:15.330   23:59:45	-- dd/common.sh@14 -- # local bs=1048576
00:26:15.330   23:59:45	-- dd/common.sh@15 -- # local count=1
00:26:15.330   23:59:45	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:26:15.330    23:59:45	-- dd/common.sh@18 -- # gen_conf
00:26:15.330    23:59:45	-- dd/common.sh@31 -- # xtrace_disable
00:26:15.330    23:59:45	-- common/autotest_common.sh@10 -- # set +x
00:26:15.330  {
00:26:15.330    "subsystems": [
00:26:15.330      {
00:26:15.330        "subsystem": "bdev",
00:26:15.330        "config": [
00:26:15.330          {
00:26:15.330            "params": {
00:26:15.330              "trtype": "pcie",
00:26:15.330              "traddr": "0000:00:06.0",
00:26:15.330              "name": "Nvme0"
00:26:15.330            },
00:26:15.330            "method": "bdev_nvme_attach_controller"
00:26:15.330          },
00:26:15.330          {
00:26:15.330            "method": "bdev_wait_for_examine"
00:26:15.330          }
00:26:15.330        ]
00:26:15.330      }
00:26:15.330    ]
00:26:15.330  }
00:26:15.330  [2024-12-13 23:59:46.039809] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:15.330  [2024-12-13 23:59:46.040033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133745 ]
00:26:15.588  [2024-12-13 23:59:46.207503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:15.846  [2024-12-13 23:59:46.392747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:16.104  
[2024-12-13T23:59:47.773Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:26:17.041  
00:26:17.041   23:59:47	-- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:26:17.041   23:59:47	-- dd/basic_rw.sh@23 -- # count=3
00:26:17.041   23:59:47	-- dd/basic_rw.sh@24 -- # count=3
00:26:17.041   23:59:47	-- dd/basic_rw.sh@25 -- # size=49152
00:26:17.041   23:59:47	-- dd/basic_rw.sh@27 -- # gen_bytes 49152
00:26:17.041   23:59:47	-- dd/common.sh@98 -- # xtrace_disable
00:26:17.041   23:59:47	-- common/autotest_common.sh@10 -- # set +x
00:26:17.608   23:59:48	-- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62
00:26:17.608    23:59:48	-- dd/basic_rw.sh@30 -- # gen_conf
00:26:17.608    23:59:48	-- dd/common.sh@31 -- # xtrace_disable
00:26:17.608    23:59:48	-- common/autotest_common.sh@10 -- # set +x
00:26:17.608  {
00:26:17.608    "subsystems": [
00:26:17.608      {
00:26:17.608        "subsystem": "bdev",
00:26:17.608        "config": [
00:26:17.608          {
00:26:17.608            "params": {
00:26:17.608              "trtype": "pcie",
00:26:17.608              "traddr": "0000:00:06.0",
00:26:17.608              "name": "Nvme0"
00:26:17.608            },
00:26:17.608            "method": "bdev_nvme_attach_controller"
00:26:17.608          },
00:26:17.608          {
00:26:17.608            "method": "bdev_wait_for_examine"
00:26:17.608          }
00:26:17.608        ]
00:26:17.608      }
00:26:17.608    ]
00:26:17.608  }
00:26:17.608  [2024-12-13 23:59:48.122970] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:17.608  [2024-12-13 23:59:48.123165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133777 ]
00:26:17.608  [2024-12-13 23:59:48.291086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:17.867  [2024-12-13 23:59:48.464518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:18.125  
[2024-12-13T23:59:49.794Z] Copying: 48/48 [kB] (average 46 MBps)
00:26:19.062  
00:26:19.062   23:59:49	-- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62
00:26:19.062    23:59:49	-- dd/basic_rw.sh@37 -- # gen_conf
00:26:19.062    23:59:49	-- dd/common.sh@31 -- # xtrace_disable
00:26:19.062    23:59:49	-- common/autotest_common.sh@10 -- # set +x
00:26:19.322  [2024-12-13 23:59:49.796240] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:19.322  [2024-12-13 23:59:49.796974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133804 ]
00:26:19.322  {
00:26:19.322    "subsystems": [
00:26:19.322      {
00:26:19.322        "subsystem": "bdev",
00:26:19.322        "config": [
00:26:19.322          {
00:26:19.322            "params": {
00:26:19.322              "trtype": "pcie",
00:26:19.322              "traddr": "0000:00:06.0",
00:26:19.322              "name": "Nvme0"
00:26:19.322            },
00:26:19.322            "method": "bdev_nvme_attach_controller"
00:26:19.322          },
00:26:19.322          {
00:26:19.322            "method": "bdev_wait_for_examine"
00:26:19.322          }
00:26:19.322        ]
00:26:19.322      }
00:26:19.322    ]
00:26:19.322  }
00:26:19.322  [2024-12-13 23:59:49.951007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:19.582  [2024-12-13 23:59:50.116501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:19.842  
[2024-12-13T23:59:51.512Z] Copying: 48/48 [kB] (average 46 MBps)
00:26:20.780  
00:26:20.780   23:59:51	-- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:20.780   23:59:51	-- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152
00:26:20.780   23:59:51	-- dd/common.sh@10 -- # local bdev=Nvme0n1
00:26:20.780   23:59:51	-- dd/common.sh@11 -- # local nvme_ref=
00:26:20.781   23:59:51	-- dd/common.sh@12 -- # local size=49152
00:26:20.781   23:59:51	-- dd/common.sh@14 -- # local bs=1048576
00:26:20.781   23:59:51	-- dd/common.sh@15 -- # local count=1
00:26:20.781   23:59:51	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:26:20.781    23:59:51	-- dd/common.sh@18 -- # gen_conf
00:26:20.781    23:59:51	-- dd/common.sh@31 -- # xtrace_disable
00:26:20.781    23:59:51	-- common/autotest_common.sh@10 -- # set +x
00:26:20.781  {
00:26:20.781    "subsystems": [
00:26:20.781      {
00:26:20.781        "subsystem": "bdev",
00:26:20.781        "config": [
00:26:20.781          {
00:26:20.781            "params": {
00:26:20.781              "trtype": "pcie",
00:26:20.781              "traddr": "0000:00:06.0",
00:26:20.781              "name": "Nvme0"
00:26:20.781            },
00:26:20.781            "method": "bdev_nvme_attach_controller"
00:26:20.781          },
00:26:20.781          {
00:26:20.781            "method": "bdev_wait_for_examine"
00:26:20.781          }
00:26:20.781        ]
00:26:20.781      }
00:26:20.781    ]
00:26:20.781  }
00:26:20.781  [2024-12-13 23:59:51.392007] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:20.781  [2024-12-13 23:59:51.392392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133832 ]
00:26:21.048  [2024-12-13 23:59:51.558569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:21.048  [2024-12-13 23:59:51.719503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:21.617  
[2024-12-13T23:59:53.288Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:26:22.556  
00:26:22.556  ************************************
00:26:22.556  END TEST dd_rw
00:26:22.556  ************************************
00:26:22.556  
00:26:22.556  real	0m33.627s
00:26:22.556  user	0m27.517s
00:26:22.556  sys	0m4.777s
00:26:22.556   23:59:53	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:22.556   23:59:53	-- common/autotest_common.sh@10 -- # set +x
00:26:22.556   23:59:53	-- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset
00:26:22.556   23:59:53	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:22.556   23:59:53	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:22.556   23:59:53	-- common/autotest_common.sh@10 -- # set +x
00:26:22.556  ************************************
00:26:22.556  START TEST dd_rw_offset
00:26:22.556  ************************************
00:26:22.556   23:59:53	-- common/autotest_common.sh@1114 -- # basic_offset
00:26:22.556   23:59:53	-- dd/basic_rw.sh@52 -- # local count seek skip data data_check
00:26:22.556   23:59:53	-- dd/basic_rw.sh@54 -- # gen_bytes 4096
00:26:22.556   23:59:53	-- dd/common.sh@98 -- # xtrace_disable
00:26:22.556   23:59:53	-- common/autotest_common.sh@10 -- # set +x
00:26:22.556   23:59:53	-- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 ))
00:26:22.557   23:59:53	-- dd/basic_rw.sh@56 -- # data=mon1ruv7g4pwryxzy51cd1ylixwyjo89l2xqms43wxmlamsle49nj1zl6ykocv4okmltm61e9y2le0wuokkaqzpld0iinj3digjame6ugukwq66ngtlxzh5ipg4f7turikx2eajfkavxcpcga0iyg5ksm732fqg1x8to56qm10aimtr2nfd619yiyxrs2pej0til3tfv4ehgqpktuyaty12luw988t6dry88t4m2cwy5thhdyc4ea7q6d95rz49fi2s5axq3iez6kfa3wch7r04p6p4j0vfi27jcz0biqwk7v57wuv3uzvjwj5qmv2l5luwqcf9k595pu531tu3s8vu4g5x1gvsaf7qubbyyvu38pi4xtwlghpy3kpfes0w0ycn3ytm6jly5y7v9h4izyr4k8sjs85n4n3xlxf6nedipuo801vig07r6wy6wjc17b6d2spsm3xfli6pqr5ywlgartexdzijyi2be8ct1wytttlfekks6unn0ejaktng0l8ndftdz8xxwpapw1neqt05pzx85wd7fubh8w5csklhtpyjbkw9ocujvpuiwcsegnhxrlxczr1zosyseogrninkk600b768qhfaigl0buvheih9qdfrdlbb2n0nlj97lw58mkpoz0f75o9n8ul2knbdmtp3ptw9wx0eoozq5xv34xw6q5i533629trz1z6uok9vwkjrqvcynzm90k3mnoh18294jkklcbug3qpcm7373yxc2gexjqaj4lxqas5tf7ahdz86lgksanae2ce74mupoazgsl2b6kttp9dlw1oxlcd1ou6meur214b61kx5aoczdmj0lgh8nxob2y84zsihfbo08rc7nhmq8a5uluqg3dgbw9k0ojkb0ojml0m7ykclb00xbk4zhudtdo3zbvyh8slhxi7d6imxopmoszu7rs3ntah74287ryo57g1xx7h04vivi8f8s3u3bbz3b4fjq0hzbp2ggc21qc2i8in7tjalg70b7pavpvpuxecvn428vuty0cqe28pod0otp9xz4t3tgs8zqgu9e9mmu600csgh66mc9tcl4ahx5eys6o2pp2a0xbaphmaucau8zu1iup6lsxoiymd7w31ypw45cori9q4dlbwfe8gyerbyg4d8kf6vcqtgcfc95314o26wr6m4almvhfn48c8n6kw0vzagxm300mw433ne64zomu2x2p2rqip6mnpk4xh9a6amkc3efwwxei4pxfog3b2k2owp0rdhunbuygn48d5zgy5l5t0yzp0jwxqqmhw1bu6tuagcurh4trakxcgms3qmypfaax53tquncv0xslwa9y8xzm2ez23vhxt6ft0gupc8jtkk2rlby1boybr7ew1jr9ejvquy4g5bu4xz4s9vhab8gppo6l9uqvkdc6apcgb1wda1wj26at9v9a2chijavip9rtg4j0ohkug8zgwxc5ui2regvrhfttafxi5h5ivrybltb03v42rno1peagchvp8n32l1m9s5068mnr00uwyhyixdl54nqdcxcc3gc0n70dxhdiieuxk85f3iocgkqclvswdjxwbfk1m96jjehd1yom25x249izn0sp4065ot9i7wbm2ejxg7cvwm988fh4qb5a3ahc3l2ri8sddn6hf8tw57jjrv95jbotofphygmwaob4kn1aotbpch0c17ga10qt5xam5byeupj6sw958iziqo3sctbvl3v82fy15ighafiv1q1ehmhvza4of5uqyfc3lna0or4rsn4h45e4cpv0vsozj32ioekedck18u2n284avr3065r8qaiyt0qferhw0vpu7fa34h5q72wnjzi5mp5ipq4uwn6645flh7cgh13je3cilzagokf9pb49nzd9xb9490jyk2uniyceqxlluaj0ji1yuwi9lblx75fx5tkylx4wj617w4ypngar900jfxpoi1etfniz0cfh9hxckne4ndj76gzp09m0o4sn3a7i2oyx0m2t8qxn0l17kd1romaqz80x3fddj3y6zfqxvxc3iji2x676d7si7mxqfk3ibeqkstwmjzs7clmr1keb8efm0d4eb5dvwzfpx57f25vfyjkztfxkbkv3z2nv9a5vpdvxsvxw04kvqmqg2ygyced2ou99lp00ftcve2pbt4v15bxet6bhrmiyqmwosad7kh5gpqd2knjfboq30d9fyp2jz9jmqilq197peyrma6mpd4sm7igg004r6xen2o3c5r4y3o4njgdk8lubek4gcf1259gdjbaad392dug4h6vsyz7xkotf34uu3gx3td3c27sv4s5fojc103qy889az1gkuq4myx5l2yqnuo63cj8xw0i9abpp8wezt2sispuqa9snphc95lsq2v8lrt2m48wsxuvvdb4wai7m7c7au82cn4wkqdco6xybi61y3xn0ip7fmmgm7599o73uh4wbnt00mj4vlaxu07ufziw3cbn3608v1rzxg3c45r0bkp9k8re9f4ic9vx6z88ky5e1eldnjwfscsol7e1cwazr1a56bbgd6hv4rxpgw9j67c4qutvy32cawc7vmxbg0a6vtk3nmk46q4uuqd020t431q4o7gxl4vnoouu5iqtve8zml3gkk1vdobx9v1mk0owy21uqfj32zyeqijjvh3cqdjg8dsh4j5nj6ro2jttqpwgr8qvljkku0x4xbb5jlicgjcq5d6fo77c5srt6budhqz3j5oh1x38h6z2cd42zflvhb0jpjuox6igeu70uj6evi3o5hb0ez3nrh8xxi0wzwcdvwslsx606w8qhki0c5m6v8mgeyi0wexgas87a5p1uwh1uje9npr3mxsxlxi8n9ldwhsqcwluo1vsvrmrmpelvtpua5cseyt8iqo07a3d4ghm4wn9yvlg4v9u5x1yx2nfapw4ead2qkvhyhm7eceg6sidkpgu3dgcfp46obwolbt56ytpqdas6xsudg8150i3rdev3mpwci4etk6lnkcqdm16vt923e35aoymrw5gl80ke4lw7nctjnqljqezzalqltmp2vtpqm1d12fm2jvicryyugmzwagexpdyyi8eceh24mmyj630dz9mkjgf04xcss80s48vqow4j9tom6l6z26fr9tgbytgiefbzbzesu41kublqkn698tcdpt41z9pcmv12veuzef54pdf8lujdrplpb5hpwk55zx6cdkgh8fcqkdbog59cjujryti8p4vs1bk72nvzhxnsynyv80gnl2daz4fdafh62l12xsf2xrc9o91u9se2pea39n5ktniy6emhlkifj04njbfadgl1xm19gwpxuzi3og9iok775k9lwnykqr7rh42digo3nvrekcrnzfwo93smp3kz1g5ubf9uigi4n9mcv3okfgh8zuazrkxdsmiq0z4kphah8fzbnlenmx2nv0d09eyul34xpahkikmb4rji3xl
e61qsyk0a4dwemazkz47izqzs6vher6x5ur78tpyrddkefk08fcl2ax4oc0ng5lz5r8zotcorsskr6fklvnzb3w9j4z2963gtpdamtnchw0zbodf4lojsn81xhbmkhtvttr0atmw89m8ap8fzau3qw0w42fnvawesgpsbxq24tebtdzt3ktvdbot7d6vdket2njx97xj3ehhzc7daraovtbjxux8m14b3cdmjeelo1ajcfknb4xv343tw0aaub1h18dlw60g37kkj78mc5lk5lmgas76etht75ez5a3wnh51w7um480xaq8tsai9tr561wppl8391n4d9oay4vkwp2nxroq4fj9j7m8qwupzq52nof7ge4flx5zkt9sl9o1guuz4kwlq05aqb3x37pkgoq5pc35r9hvvxg7ak0a6n7m5o9z9cnw6ld5y1l0x9v0tdp82ruec03c1301hswd5e9w7drdqyjsirg3vaauqy96bvmbkqbmma7uvbqg1uq5ymt0ncqmqi0uq5xeabl0n4jq8hzpvdytzokeab7cq87dffrwtgc4168zo968lc142hfu5cx87bjnj38pt1hws
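The payload above is what gen_bytes 4096 produced: 4096 random lowercase alphanumerics, captured into data and written out below with --seek=1. One plausible shape for such a generator, assuming only what the output shows (the real helper lives in dd/common.sh and may differ):

gen_bytes() {   # emit $1 random [a-z0-9] characters, no newline
    tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"
}
data=$(gen_bytes 4096)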
00:26:22.557   23:59:53	-- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62
00:26:22.557    23:59:53	-- dd/basic_rw.sh@59 -- # gen_conf
00:26:22.557    23:59:53	-- dd/common.sh@31 -- # xtrace_disable
00:26:22.557    23:59:53	-- common/autotest_common.sh@10 -- # set +x
00:26:22.557  {
00:26:22.557    "subsystems": [
00:26:22.557      {
00:26:22.557        "subsystem": "bdev",
00:26:22.557        "config": [
00:26:22.557          {
00:26:22.557            "params": {
00:26:22.557              "trtype": "pcie",
00:26:22.557              "traddr": "0000:00:06.0",
00:26:22.557              "name": "Nvme0"
00:26:22.557            },
00:26:22.557            "method": "bdev_nvme_attach_controller"
00:26:22.557          },
00:26:22.557          {
00:26:22.557            "method": "bdev_wait_for_examine"
00:26:22.557          }
00:26:22.557        ]
00:26:22.557      }
00:26:22.557    ]
00:26:22.557  }
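
The JSON block above is what gen_conf hands spdk_dd on /dev/fd/62: it attaches the NVMe controller at PCIe address 0000:00:06.0 as bdev "Nvme0", then waits for bdev examination so Nvme0n1 exists before any I/O is issued. A minimal standalone sketch of the same invocation (the PCIe address is the one this host enumerated; conf.json and the dump paths are placeholders):

    cat > conf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {"trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0"},
              "method": "bdev_nvme_attach_controller"
            },
            {"method": "bdev_wait_for_examine"}
          ]
        }
      ]
    }
    EOF
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json conf.json
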
00:26:22.557  [2024-12-13 23:59:53.203760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:22.557  [2024-12-13 23:59:53.203959] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133877 ]
00:26:22.817  [2024-12-13 23:59:53.373561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:22.817  [2024-12-13 23:59:53.546628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:23.386  
[2024-12-13T23:59:55.056Z] Copying: 4096/4096 [B] (average 4000 kBps)
00:26:24.324  
00:26:24.324   23:59:54	-- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62
00:26:24.324    23:59:54	-- dd/basic_rw.sh@65 -- # gen_conf
00:26:24.324    23:59:54	-- dd/common.sh@31 -- # xtrace_disable
00:26:24.324    23:59:54	-- common/autotest_common.sh@10 -- # set +x
00:26:24.324  [2024-12-13 23:59:54.800894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:24.324  [2024-12-13 23:59:54.801085] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133908 ]
00:26:24.324  {
00:26:24.324    "subsystems": [
00:26:24.324      {
00:26:24.324        "subsystem": "bdev",
00:26:24.324        "config": [
00:26:24.324          {
00:26:24.324            "params": {
00:26:24.324              "trtype": "pcie",
00:26:24.324              "traddr": "0000:00:06.0",
00:26:24.324              "name": "Nvme0"
00:26:24.324            },
00:26:24.324            "method": "bdev_nvme_attach_controller"
00:26:24.324          },
00:26:24.324          {
00:26:24.324            "method": "bdev_wait_for_examine"
00:26:24.324          }
00:26:24.324        ]
00:26:24.324      }
00:26:24.324    ]
00:26:24.324  }
00:26:24.324  [2024-12-13 23:59:54.954772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:24.582  [2024-12-13 23:59:55.132944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:24.840  
[2024-12-13T23:59:56.512Z] Copying: 4096/4096 [B] (average 4000 kBps)
00:26:25.780  
00:26:25.780   23:59:56	-- dd/basic_rw.sh@71 -- # read -rn4096 data_check
00:26:25.780  ************************************
00:26:25.780  END TEST dd_rw_offset
00:26:25.780  ************************************
00:26:25.781   23:59:56	-- dd/basic_rw.sh@72 -- # [[ mon1ruv7g4pwryxzy51cd1ylixwyjo89l2xqms43wxmlamsle49nj1zl6ykocv4okmltm61e9y2le0wuokkaqzpld0iinj3digjame6ugukwq66ngtlxzh5ipg4f7turikx2eajfkavxcpcga0iyg5ksm732fqg1x8to56qm10aimtr2nfd619yiyxrs2pej0til3tfv4ehgqpktuyaty12luw988t6dry88t4m2cwy5thhdyc4ea7q6d95rz49fi2s5axq3iez6kfa3wch7r04p6p4j0vfi27jcz0biqwk7v57wuv3uzvjwj5qmv2l5luwqcf9k595pu531tu3s8vu4g5x1gvsaf7qubbyyvu38pi4xtwlghpy3kpfes0w0ycn3ytm6jly5y7v9h4izyr4k8sjs85n4n3xlxf6nedipuo801vig07r6wy6wjc17b6d2spsm3xfli6pqr5ywlgartexdzijyi2be8ct1wytttlfekks6unn0ejaktng0l8ndftdz8xxwpapw1neqt05pzx85wd7fubh8w5csklhtpyjbkw9ocujvpuiwcsegnhxrlxczr1zosyseogrninkk600b768qhfaigl0buvheih9qdfrdlbb2n0nlj97lw58mkpoz0f75o9n8ul2knbdmtp3ptw9wx0eoozq5xv34xw6q5i533629trz1z6uok9vwkjrqvcynzm90k3mnoh18294jkklcbug3qpcm7373yxc2gexjqaj4lxqas5tf7ahdz86lgksanae2ce74mupoazgsl2b6kttp9dlw1oxlcd1ou6meur214b61kx5aoczdmj0lgh8nxob2y84zsihfbo08rc7nhmq8a5uluqg3dgbw9k0ojkb0ojml0m7ykclb00xbk4zhudtdo3zbvyh8slhxi7d6imxopmoszu7rs3ntah74287ryo57g1xx7h04vivi8f8s3u3bbz3b4fjq0hzbp2ggc21qc2i8in7tjalg70b7pavpvpuxecvn428vuty0cqe28pod0otp9xz4t3tgs8zqgu9e9mmu600csgh66mc9tcl4ahx5eys6o2pp2a0xbaphmaucau8zu1iup6lsxoiymd7w31ypw45cori9q4dlbwfe8gyerbyg4d8kf6vcqtgcfc95314o26wr6m4almvhfn48c8n6kw0vzagxm300mw433ne64zomu2x2p2rqip6mnpk4xh9a6amkc3efwwxei4pxfog3b2k2owp0rdhunbuygn48d5zgy5l5t0yzp0jwxqqmhw1bu6tuagcurh4trakxcgms3qmypfaax53tquncv0xslwa9y8xzm2ez23vhxt6ft0gupc8jtkk2rlby1boybr7ew1jr9ejvquy4g5bu4xz4s9vhab8gppo6l9uqvkdc6apcgb1wda1wj26at9v9a2chijavip9rtg4j0ohkug8zgwxc5ui2regvrhfttafxi5h5ivrybltb03v42rno1peagchvp8n32l1m9s5068mnr00uwyhyixdl54nqdcxcc3gc0n70dxhdiieuxk85f3iocgkqclvswdjxwbfk1m96jjehd1yom25x249izn0sp4065ot9i7wbm2ejxg7cvwm988fh4qb5a3ahc3l2ri8sddn6hf8tw57jjrv95jbotofphygmwaob4kn1aotbpch0c17ga10qt5xam5byeupj6sw958iziqo3sctbvl3v82fy15ighafiv1q1ehmhvza4of5uqyfc3lna0or4rsn4h45e4cpv0vsozj32ioekedck18u2n284avr3065r8qaiyt0qferhw0vpu7fa34h5q72wnjzi5mp5ipq4uwn6645flh7cgh13je3cilzagokf9pb49nzd9xb9490jyk2uniyceqxlluaj0ji1yuwi9lblx75fx5tkylx4wj617w4ypngar900jfxpoi1etfniz0cfh9hxckne4ndj76gzp09m0o4sn3a7i2oyx0m2t8qxn0l17kd1romaqz80x3fddj3y6zfqxvxc3iji2x676d7si7mxqfk3ibeqkstwmjzs7clmr1keb8efm0d4eb5dvwzfpx57f25vfyjkztfxkbkv3z2nv9a5vpdvxsvxw04kvqmqg2ygyced2ou99lp00ftcve2pbt4v15bxet6bhrmiyqmwosad7kh5gpqd2knjfboq30d9fyp2jz9jmqilq197peyrma6mpd4sm7igg004r6xen2o3c5r4y3o4njgdk8lubek4gcf1259gdjbaad392dug4h6vsyz7xkotf34uu3gx3td3c27sv4s5fojc103qy889az1gkuq4myx5l2yqnuo63cj8xw0i9abpp8wezt2sispuqa9snphc95lsq2v8lrt2m48wsxuvvdb4wai7m7c7au82cn4wkqdco6xybi61y3xn0ip7fmmgm7599o73uh4wbnt00mj4vlaxu07ufziw3cbn3608v1rzxg3c45r0bkp9k8re9f4ic9vx6z88ky5e1eldnjwfscsol7e1cwazr1a56bbgd6hv4rxpgw9j67c4qutvy32cawc7vmxbg0a6vtk3nmk46q4uuqd020t431q4o7gxl4vnoouu5iqtve8zml3gkk1vdobx9v1mk0owy21uqfj32zyeqijjvh3cqdjg8dsh4j5nj6ro2jttqpwgr8qvljkku0x4xbb5jlicgjcq5d6fo77c5srt6budhqz3j5oh1x38h6z2cd42zflvhb0jpjuox6igeu70uj6evi3o5hb0ez3nrh8xxi0wzwcdvwslsx606w8qhki0c5m6v8mgeyi0wexgas87a5p1uwh1uje9npr3mxsxlxi8n9ldwhsqcwluo1vsvrmrmpelvtpua5cseyt8iqo07a3d4ghm4wn9yvlg4v9u5x1yx2nfapw4ead2qkvhyhm7eceg6sidkpgu3dgcfp46obwolbt56ytpqdas6xsudg8150i3rdev3mpwci4etk6lnkcqdm16vt923e35aoymrw5gl80ke4lw7nctjnqljqezzalqltmp2vtpqm1d12fm2jvicryyugmzwagexpdyyi8eceh24mmyj630dz9mkjgf04xcss80s48vqow4j9tom6l6z26fr9tgbytgiefbzbzesu41kublqkn698tcdpt41z9pcmv12veuzef54pdf8lujdrplpb5hpwk55zx6cdkgh8fcqkdbog59cjujryti8p4vs1bk72nvzhxnsynyv80gnl2daz4fdafh62l12xsf2xrc9o91u9se2pea39n5ktniy6emhlkifj04njbfadgl1xm19gwpxuzi3og9iok775k9lwnykqr7rh42digo3nvrekcrnzfwo93smp3kz1g5ubf9uigi4n9mcv3okfgh8zuazrkxdsmiq0z4kphah8fzbnlenmx2nv0d09eyul34xpahkikmb4rji3xle6
1qsyk0a4dwemazkz47izqzs6vher6x5ur78tpyrddkefk08fcl2ax4oc0ng5lz5r8zotcorsskr6fklvnzb3w9j4z2963gtpdamtnchw0zbodf4lojsn81xhbmkhtvttr0atmw89m8ap8fzau3qw0w42fnvawesgpsbxq24tebtdzt3ktvdbot7d6vdket2njx97xj3ehhzc7daraovtbjxux8m14b3cdmjeelo1ajcfknb4xv343tw0aaub1h18dlw60g37kkj78mc5lk5lmgas76etht75ez5a3wnh51w7um480xaq8tsai9tr561wppl8391n4d9oay4vkwp2nxroq4fj9j7m8qwupzq52nof7ge4flx5zkt9sl9o1guuz4kwlq05aqb3x37pkgoq5pc35r9hvvxg7ak0a6n7m5o9z9cnw6ld5y1l0x9v0tdp82ruec03c1301hswd5e9w7drdqyjsirg3vaauqy96bvmbkqbmma7uvbqg1uq5ymt0ncqmqi0uq5xeabl0n4jq8hzpvdytzokeab7cq87dffrwtgc4168zo968lc142hfu5cx87bjnj38pt1hws == \m\o\n\1\r\u\v\7\g\4\p\w\r\y\x\z\y\5\1\c\d\1\y\l\i\x\w\y\j\o\8\9\l\2\x\q\m\s\4\3\w\x\m\l\a\m\s\l\e\4\9\n\j\1\z\l\6\y\k\o\c\v\4\o\k\m\l\t\m\6\1\e\9\y\2\l\e\0\w\u\o\k\k\a\q\z\p\l\d\0\i\i\n\j\3\d\i\g\j\a\m\e\6\u\g\u\k\w\q\6\6\n\g\t\l\x\z\h\5\i\p\g\4\f\7\t\u\r\i\k\x\2\e\a\j\f\k\a\v\x\c\p\c\g\a\0\i\y\g\5\k\s\m\7\3\2\f\q\g\1\x\8\t\o\5\6\q\m\1\0\a\i\m\t\r\2\n\f\d\6\1\9\y\i\y\x\r\s\2\p\e\j\0\t\i\l\3\t\f\v\4\e\h\g\q\p\k\t\u\y\a\t\y\1\2\l\u\w\9\8\8\t\6\d\r\y\8\8\t\4\m\2\c\w\y\5\t\h\h\d\y\c\4\e\a\7\q\6\d\9\5\r\z\4\9\f\i\2\s\5\a\x\q\3\i\e\z\6\k\f\a\3\w\c\h\7\r\0\4\p\6\p\4\j\0\v\f\i\2\7\j\c\z\0\b\i\q\w\k\7\v\5\7\w\u\v\3\u\z\v\j\w\j\5\q\m\v\2\l\5\l\u\w\q\c\f\9\k\5\9\5\p\u\5\3\1\t\u\3\s\8\v\u\4\g\5\x\1\g\v\s\a\f\7\q\u\b\b\y\y\v\u\3\8\p\i\4\x\t\w\l\g\h\p\y\3\k\p\f\e\s\0\w\0\y\c\n\3\y\t\m\6\j\l\y\5\y\7\v\9\h\4\i\z\y\r\4\k\8\s\j\s\8\5\n\4\n\3\x\l\x\f\6\n\e\d\i\p\u\o\8\0\1\v\i\g\0\7\r\6\w\y\6\w\j\c\1\7\b\6\d\2\s\p\s\m\3\x\f\l\i\6\p\q\r\5\y\w\l\g\a\r\t\e\x\d\z\i\j\y\i\2\b\e\8\c\t\1\w\y\t\t\t\l\f\e\k\k\s\6\u\n\n\0\e\j\a\k\t\n\g\0\l\8\n\d\f\t\d\z\8\x\x\w\p\a\p\w\1\n\e\q\t\0\5\p\z\x\8\5\w\d\7\f\u\b\h\8\w\5\c\s\k\l\h\t\p\y\j\b\k\w\9\o\c\u\j\v\p\u\i\w\c\s\e\g\n\h\x\r\l\x\c\z\r\1\z\o\s\y\s\e\o\g\r\n\i\n\k\k\6\0\0\b\7\6\8\q\h\f\a\i\g\l\0\b\u\v\h\e\i\h\9\q\d\f\r\d\l\b\b\2\n\0\n\l\j\9\7\l\w\5\8\m\k\p\o\z\0\f\7\5\o\9\n\8\u\l\2\k\n\b\d\m\t\p\3\p\t\w\9\w\x\0\e\o\o\z\q\5\x\v\3\4\x\w\6\q\5\i\5\3\3\6\2\9\t\r\z\1\z\6\u\o\k\9\v\w\k\j\r\q\v\c\y\n\z\m\9\0\k\3\m\n\o\h\1\8\2\9\4\j\k\k\l\c\b\u\g\3\q\p\c\m\7\3\7\3\y\x\c\2\g\e\x\j\q\a\j\4\l\x\q\a\s\5\t\f\7\a\h\d\z\8\6\l\g\k\s\a\n\a\e\2\c\e\7\4\m\u\p\o\a\z\g\s\l\2\b\6\k\t\t\p\9\d\l\w\1\o\x\l\c\d\1\o\u\6\m\e\u\r\2\1\4\b\6\1\k\x\5\a\o\c\z\d\m\j\0\l\g\h\8\n\x\o\b\2\y\8\4\z\s\i\h\f\b\o\0\8\r\c\7\n\h\m\q\8\a\5\u\l\u\q\g\3\d\g\b\w\9\k\0\o\j\k\b\0\o\j\m\l\0\m\7\y\k\c\l\b\0\0\x\b\k\4\z\h\u\d\t\d\o\3\z\b\v\y\h\8\s\l\h\x\i\7\d\6\i\m\x\o\p\m\o\s\z\u\7\r\s\3\n\t\a\h\7\4\2\8\7\r\y\o\5\7\g\1\x\x\7\h\0\4\v\i\v\i\8\f\8\s\3\u\3\b\b\z\3\b\4\f\j\q\0\h\z\b\p\2\g\g\c\2\1\q\c\2\i\8\i\n\7\t\j\a\l\g\7\0\b\7\p\a\v\p\v\p\u\x\e\c\v\n\4\2\8\v\u\t\y\0\c\q\e\2\8\p\o\d\0\o\t\p\9\x\z\4\t\3\t\g\s\8\z\q\g\u\9\e\9\m\m\u\6\0\0\c\s\g\h\6\6\m\c\9\t\c\l\4\a\h\x\5\e\y\s\6\o\2\p\p\2\a\0\x\b\a\p\h\m\a\u\c\a\u\8\z\u\1\i\u\p\6\l\s\x\o\i\y\m\d\7\w\3\1\y\p\w\4\5\c\o\r\i\9\q\4\d\l\b\w\f\e\8\g\y\e\r\b\y\g\4\d\8\k\f\6\v\c\q\t\g\c\f\c\9\5\3\1\4\o\2\6\w\r\6\m\4\a\l\m\v\h\f\n\4\8\c\8\n\6\k\w\0\v\z\a\g\x\m\3\0\0\m\w\4\3\3\n\e\6\4\z\o\m\u\2\x\2\p\2\r\q\i\p\6\m\n\p\k\4\x\h\9\a\6\a\m\k\c\3\e\f\w\w\x\e\i\4\p\x\f\o\g\3\b\2\k\2\o\w\p\0\r\d\h\u\n\b\u\y\g\n\4\8\d\5\z\g\y\5\l\5\t\0\y\z\p\0\j\w\x\q\q\m\h\w\1\b\u\6\t\u\a\g\c\u\r\h\4\t\r\a\k\x\c\g\m\s\3\q\m\y\p\f\a\a\x\5\3\t\q\u\n\c\v\0\x\s\l\w\a\9\y\8\x\z\m\2\e\z\2\3\v\h\x\t\6\f\t\0\g\u\p\c\8\j\t\k\k\2\r\l\b\y\1\b\o\y\b\r\7\e\w\1\j\r\9\e\j\v\q\u\y\4\g\5\b\u\4\x\z\4\s\9\v\h\a\b\8\g\p\p\o\6\l\9\u\q\v\k\d\c\6\a\p\c\g\b\1\w\d\a\1\w\j\2\6\a\t\9\v\9\a\2\c\h\i\j\a\v\i\p\9\r\t\g\4\j\0\o\h\k\u\g\8\z\g\w\x\c\5\u\i\2\r\e\
g\v\r\h\f\t\t\a\f\x\i\5\h\5\i\v\r\y\b\l\t\b\0\3\v\4\2\r\n\o\1\p\e\a\g\c\h\v\p\8\n\3\2\l\1\m\9\s\5\0\6\8\m\n\r\0\0\u\w\y\h\y\i\x\d\l\5\4\n\q\d\c\x\c\c\3\g\c\0\n\7\0\d\x\h\d\i\i\e\u\x\k\8\5\f\3\i\o\c\g\k\q\c\l\v\s\w\d\j\x\w\b\f\k\1\m\9\6\j\j\e\h\d\1\y\o\m\2\5\x\2\4\9\i\z\n\0\s\p\4\0\6\5\o\t\9\i\7\w\b\m\2\e\j\x\g\7\c\v\w\m\9\8\8\f\h\4\q\b\5\a\3\a\h\c\3\l\2\r\i\8\s\d\d\n\6\h\f\8\t\w\5\7\j\j\r\v\9\5\j\b\o\t\o\f\p\h\y\g\m\w\a\o\b\4\k\n\1\a\o\t\b\p\c\h\0\c\1\7\g\a\1\0\q\t\5\x\a\m\5\b\y\e\u\p\j\6\s\w\9\5\8\i\z\i\q\o\3\s\c\t\b\v\l\3\v\8\2\f\y\1\5\i\g\h\a\f\i\v\1\q\1\e\h\m\h\v\z\a\4\o\f\5\u\q\y\f\c\3\l\n\a\0\o\r\4\r\s\n\4\h\4\5\e\4\c\p\v\0\v\s\o\z\j\3\2\i\o\e\k\e\d\c\k\1\8\u\2\n\2\8\4\a\v\r\3\0\6\5\r\8\q\a\i\y\t\0\q\f\e\r\h\w\0\v\p\u\7\f\a\3\4\h\5\q\7\2\w\n\j\z\i\5\m\p\5\i\p\q\4\u\w\n\6\6\4\5\f\l\h\7\c\g\h\1\3\j\e\3\c\i\l\z\a\g\o\k\f\9\p\b\4\9\n\z\d\9\x\b\9\4\9\0\j\y\k\2\u\n\i\y\c\e\q\x\l\l\u\a\j\0\j\i\1\y\u\w\i\9\l\b\l\x\7\5\f\x\5\t\k\y\l\x\4\w\j\6\1\7\w\4\y\p\n\g\a\r\9\0\0\j\f\x\p\o\i\1\e\t\f\n\i\z\0\c\f\h\9\h\x\c\k\n\e\4\n\d\j\7\6\g\z\p\0\9\m\0\o\4\s\n\3\a\7\i\2\o\y\x\0\m\2\t\8\q\x\n\0\l\1\7\k\d\1\r\o\m\a\q\z\8\0\x\3\f\d\d\j\3\y\6\z\f\q\x\v\x\c\3\i\j\i\2\x\6\7\6\d\7\s\i\7\m\x\q\f\k\3\i\b\e\q\k\s\t\w\m\j\z\s\7\c\l\m\r\1\k\e\b\8\e\f\m\0\d\4\e\b\5\d\v\w\z\f\p\x\5\7\f\2\5\v\f\y\j\k\z\t\f\x\k\b\k\v\3\z\2\n\v\9\a\5\v\p\d\v\x\s\v\x\w\0\4\k\v\q\m\q\g\2\y\g\y\c\e\d\2\o\u\9\9\l\p\0\0\f\t\c\v\e\2\p\b\t\4\v\1\5\b\x\e\t\6\b\h\r\m\i\y\q\m\w\o\s\a\d\7\k\h\5\g\p\q\d\2\k\n\j\f\b\o\q\3\0\d\9\f\y\p\2\j\z\9\j\m\q\i\l\q\1\9\7\p\e\y\r\m\a\6\m\p\d\4\s\m\7\i\g\g\0\0\4\r\6\x\e\n\2\o\3\c\5\r\4\y\3\o\4\n\j\g\d\k\8\l\u\b\e\k\4\g\c\f\1\2\5\9\g\d\j\b\a\a\d\3\9\2\d\u\g\4\h\6\v\s\y\z\7\x\k\o\t\f\3\4\u\u\3\g\x\3\t\d\3\c\2\7\s\v\4\s\5\f\o\j\c\1\0\3\q\y\8\8\9\a\z\1\g\k\u\q\4\m\y\x\5\l\2\y\q\n\u\o\6\3\c\j\8\x\w\0\i\9\a\b\p\p\8\w\e\z\t\2\s\i\s\p\u\q\a\9\s\n\p\h\c\9\5\l\s\q\2\v\8\l\r\t\2\m\4\8\w\s\x\u\v\v\d\b\4\w\a\i\7\m\7\c\7\a\u\8\2\c\n\4\w\k\q\d\c\o\6\x\y\b\i\6\1\y\3\x\n\0\i\p\7\f\m\m\g\m\7\5\9\9\o\7\3\u\h\4\w\b\n\t\0\0\m\j\4\v\l\a\x\u\0\7\u\f\z\i\w\3\c\b\n\3\6\0\8\v\1\r\z\x\g\3\c\4\5\r\0\b\k\p\9\k\8\r\e\9\f\4\i\c\9\v\x\6\z\8\8\k\y\5\e\1\e\l\d\n\j\w\f\s\c\s\o\l\7\e\1\c\w\a\z\r\1\a\5\6\b\b\g\d\6\h\v\4\r\x\p\g\w\9\j\6\7\c\4\q\u\t\v\y\3\2\c\a\w\c\7\v\m\x\b\g\0\a\6\v\t\k\3\n\m\k\4\6\q\4\u\u\q\d\0\2\0\t\4\3\1\q\4\o\7\g\x\l\4\v\n\o\o\u\u\5\i\q\t\v\e\8\z\m\l\3\g\k\k\1\v\d\o\b\x\9\v\1\m\k\0\o\w\y\2\1\u\q\f\j\3\2\z\y\e\q\i\j\j\v\h\3\c\q\d\j\g\8\d\s\h\4\j\5\n\j\6\r\o\2\j\t\t\q\p\w\g\r\8\q\v\l\j\k\k\u\0\x\4\x\b\b\5\j\l\i\c\g\j\c\q\5\d\6\f\o\7\7\c\5\s\r\t\6\b\u\d\h\q\z\3\j\5\o\h\1\x\3\8\h\6\z\2\c\d\4\2\z\f\l\v\h\b\0\j\p\j\u\o\x\6\i\g\e\u\7\0\u\j\6\e\v\i\3\o\5\h\b\0\e\z\3\n\r\h\8\x\x\i\0\w\z\w\c\d\v\w\s\l\s\x\6\0\6\w\8\q\h\k\i\0\c\5\m\6\v\8\m\g\e\y\i\0\w\e\x\g\a\s\8\7\a\5\p\1\u\w\h\1\u\j\e\9\n\p\r\3\m\x\s\x\l\x\i\8\n\9\l\d\w\h\s\q\c\w\l\u\o\1\v\s\v\r\m\r\m\p\e\l\v\t\p\u\a\5\c\s\e\y\t\8\i\q\o\0\7\a\3\d\4\g\h\m\4\w\n\9\y\v\l\g\4\v\9\u\5\x\1\y\x\2\n\f\a\p\w\4\e\a\d\2\q\k\v\h\y\h\m\7\e\c\e\g\6\s\i\d\k\p\g\u\3\d\g\c\f\p\4\6\o\b\w\o\l\b\t\5\6\y\t\p\q\d\a\s\6\x\s\u\d\g\8\1\5\0\i\3\r\d\e\v\3\m\p\w\c\i\4\e\t\k\6\l\n\k\c\q\d\m\1\6\v\t\9\2\3\e\3\5\a\o\y\m\r\w\5\g\l\8\0\k\e\4\l\w\7\n\c\t\j\n\q\l\j\q\e\z\z\a\l\q\l\t\m\p\2\v\t\p\q\m\1\d\1\2\f\m\2\j\v\i\c\r\y\y\u\g\m\z\w\a\g\e\x\p\d\y\y\i\8\e\c\e\h\2\4\m\m\y\j\6\3\0\d\z\9\m\k\j\g\f\0\4\x\c\s\s\8\0\s\4\8\v\q\o\w\4\j\9\t\o\m\6\l\6\z\2\6\f\r\9\t\g\b\y\t\g\i\e\f\b\z\b\z\e\s\u\4\1\k\u\b\l\q\k\n\6\9\8\t\c\d\p\t\4\1\z\9\p\c\m\v\1\2\v\e\u\z\e\f\5\4\p\d\f\8\l\u\j\d\r\p\l\p\b\5\h\p\w\k\5\5\z\x\6\c\d\k\g\h\8\f\c\q\k\d\b
\o\g\5\9\c\j\u\j\r\y\t\i\8\p\4\v\s\1\b\k\7\2\n\v\z\h\x\n\s\y\n\y\v\8\0\g\n\l\2\d\a\z\4\f\d\a\f\h\6\2\l\1\2\x\s\f\2\x\r\c\9\o\9\1\u\9\s\e\2\p\e\a\3\9\n\5\k\t\n\i\y\6\e\m\h\l\k\i\f\j\0\4\n\j\b\f\a\d\g\l\1\x\m\1\9\g\w\p\x\u\z\i\3\o\g\9\i\o\k\7\7\5\k\9\l\w\n\y\k\q\r\7\r\h\4\2\d\i\g\o\3\n\v\r\e\k\c\r\n\z\f\w\o\9\3\s\m\p\3\k\z\1\g\5\u\b\f\9\u\i\g\i\4\n\9\m\c\v\3\o\k\f\g\h\8\z\u\a\z\r\k\x\d\s\m\i\q\0\z\4\k\p\h\a\h\8\f\z\b\n\l\e\n\m\x\2\n\v\0\d\0\9\e\y\u\l\3\4\x\p\a\h\k\i\k\m\b\4\r\j\i\3\x\l\e\6\1\q\s\y\k\0\a\4\d\w\e\m\a\z\k\z\4\7\i\z\q\z\s\6\v\h\e\r\6\x\5\u\r\7\8\t\p\y\r\d\d\k\e\f\k\0\8\f\c\l\2\a\x\4\o\c\0\n\g\5\l\z\5\r\8\z\o\t\c\o\r\s\s\k\r\6\f\k\l\v\n\z\b\3\w\9\j\4\z\2\9\6\3\g\t\p\d\a\m\t\n\c\h\w\0\z\b\o\d\f\4\l\o\j\s\n\8\1\x\h\b\m\k\h\t\v\t\t\r\0\a\t\m\w\8\9\m\8\a\p\8\f\z\a\u\3\q\w\0\w\4\2\f\n\v\a\w\e\s\g\p\s\b\x\q\2\4\t\e\b\t\d\z\t\3\k\t\v\d\b\o\t\7\d\6\v\d\k\e\t\2\n\j\x\9\7\x\j\3\e\h\h\z\c\7\d\a\r\a\o\v\t\b\j\x\u\x\8\m\1\4\b\3\c\d\m\j\e\e\l\o\1\a\j\c\f\k\n\b\4\x\v\3\4\3\t\w\0\a\a\u\b\1\h\1\8\d\l\w\6\0\g\3\7\k\k\j\7\8\m\c\5\l\k\5\l\m\g\a\s\7\6\e\t\h\t\7\5\e\z\5\a\3\w\n\h\5\1\w\7\u\m\4\8\0\x\a\q\8\t\s\a\i\9\t\r\5\6\1\w\p\p\l\8\3\9\1\n\4\d\9\o\a\y\4\v\k\w\p\2\n\x\r\o\q\4\f\j\9\j\7\m\8\q\w\u\p\z\q\5\2\n\o\f\7\g\e\4\f\l\x\5\z\k\t\9\s\l\9\o\1\g\u\u\z\4\k\w\l\q\0\5\a\q\b\3\x\3\7\p\k\g\o\q\5\p\c\3\5\r\9\h\v\v\x\g\7\a\k\0\a\6\n\7\m\5\o\9\z\9\c\n\w\6\l\d\5\y\1\l\0\x\9\v\0\t\d\p\8\2\r\u\e\c\0\3\c\1\3\0\1\h\s\w\d\5\e\9\w\7\d\r\d\q\y\j\s\i\r\g\3\v\a\a\u\q\y\9\6\b\v\m\b\k\q\b\m\m\a\7\u\v\b\q\g\1\u\q\5\y\m\t\0\n\c\q\m\q\i\0\u\q\5\x\e\a\b\l\0\n\4\j\q\8\h\z\p\v\d\y\t\z\o\k\e\a\b\7\c\q\8\7\d\f\f\r\w\t\g\c\4\1\6\8\z\o\9\6\8\l\c\1\4\2\h\f\u\5\c\x\8\7\b\j\n\j\3\8\p\t\1\h\w\s ]]
00:26:25.781  
00:26:25.781  real	0m3.337s
00:26:25.781  user	0m2.689s
00:26:25.781  sys	0m0.513s
00:26:25.781   23:59:56	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:25.781   23:59:56	-- common/autotest_common.sh@10 -- # set +x
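
The backslash-heavy pattern in the [[ ... ]] above is ordinary bash xtrace output: the right-hand side of == inside [[ ]] is printed with every character escaped. The round trip it verifies: write a 4096-byte random payload one I/O unit into the device (--seek=1), read the same region back (--skip=1 --count=1), then compare byte for byte. A condensed sketch, with gen_bytes standing in for the repo's random-payload helper and paths shortened:

    data=$(gen_bytes 4096)                                                  # payload, as dumped above
    printf %s "$data" > dd.dump0
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json conf.json            # write at offset 1
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json conf.json  # read it back
    read -rn4096 data_check < dd.dump1
    [[ $data == $data_check ]]                                              # the comparison traced above
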
00:26:25.781   23:59:56	-- dd/basic_rw.sh@1 -- # cleanup
00:26:25.781   23:59:56	-- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1
00:26:25.781   23:59:56	-- dd/common.sh@10 -- # local bdev=Nvme0n1
00:26:25.781   23:59:56	-- dd/common.sh@11 -- # local nvme_ref=
00:26:25.781   23:59:56	-- dd/common.sh@12 -- # local size=0xffff
00:26:25.781   23:59:56	-- dd/common.sh@14 -- # local bs=1048576
00:26:25.781   23:59:56	-- dd/common.sh@15 -- # local count=1
00:26:25.781   23:59:56	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:26:25.781    23:59:56	-- dd/common.sh@18 -- # gen_conf
00:26:25.781    23:59:56	-- dd/common.sh@31 -- # xtrace_disable
00:26:25.781    23:59:56	-- common/autotest_common.sh@10 -- # set +x
00:26:26.040  [2024-12-13 23:59:56.519323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:26.040  [2024-12-13 23:59:56.520050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133957 ]
00:26:26.040  {
00:26:26.040    "subsystems": [
00:26:26.040      {
00:26:26.040        "subsystem": "bdev",
00:26:26.040        "config": [
00:26:26.040          {
00:26:26.040            "params": {
00:26:26.040              "trtype": "pcie",
00:26:26.040              "traddr": "0000:00:06.0",
00:26:26.040              "name": "Nvme0"
00:26:26.040            },
00:26:26.040            "method": "bdev_nvme_attach_controller"
00:26:26.040          },
00:26:26.040          {
00:26:26.040            "method": "bdev_wait_for_examine"
00:26:26.040          }
00:26:26.040        ]
00:26:26.040      }
00:26:26.040    ]
00:26:26.040  }
00:26:26.040  [2024-12-13 23:59:56.673451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:26.300  [2024-12-13 23:59:56.836854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:26.561  
[2024-12-13T23:59:58.268Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:26:27.536  
00:26:27.536   23:59:58	-- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:27.536  
00:26:27.536  real	0m41.154s
00:26:27.536  user	0m33.448s
00:26:27.536  sys	0m6.045s
00:26:27.536   23:59:58	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:27.536   23:59:58	-- common/autotest_common.sh@10 -- # set +x
00:26:27.536  ************************************
00:26:27.536  END TEST spdk_dd_basic_rw
00:26:27.536  ************************************
00:26:27.536   23:59:58	-- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh
00:26:27.536   23:59:58	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:27.536   23:59:58	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:27.536   23:59:58	-- common/autotest_common.sh@10 -- # set +x
00:26:27.536  ************************************
00:26:27.536  START TEST spdk_dd_posix
00:26:27.536  ************************************
00:26:27.536   23:59:58	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh
00:26:27.536  * Looking for test storage...
00:26:27.536  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:26:27.796     23:59:58	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:26:27.796      23:59:58	-- common/autotest_common.sh@1690 -- # lcov --version
00:26:27.796      23:59:58	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:26:27.796     23:59:58	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:26:27.796     23:59:58	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:26:27.796     23:59:58	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:26:27.796     23:59:58	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:26:27.796     23:59:58	-- scripts/common.sh@335 -- # IFS=.-:
00:26:27.796     23:59:58	-- scripts/common.sh@335 -- # read -ra ver1
00:26:27.796     23:59:58	-- scripts/common.sh@336 -- # IFS=.-:
00:26:27.796     23:59:58	-- scripts/common.sh@336 -- # read -ra ver2
00:26:27.796     23:59:58	-- scripts/common.sh@337 -- # local 'op=<'
00:26:27.796     23:59:58	-- scripts/common.sh@339 -- # ver1_l=2
00:26:27.796     23:59:58	-- scripts/common.sh@340 -- # ver2_l=1
00:26:27.796     23:59:58	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:26:27.796     23:59:58	-- scripts/common.sh@343 -- # case "$op" in
00:26:27.796     23:59:58	-- scripts/common.sh@344 -- # : 1
00:26:27.796     23:59:58	-- scripts/common.sh@363 -- # (( v = 0 ))
00:26:27.796     23:59:58	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:27.796      23:59:58	-- scripts/common.sh@364 -- # decimal 1
00:26:27.796      23:59:58	-- scripts/common.sh@352 -- # local d=1
00:26:27.796      23:59:58	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:27.796      23:59:58	-- scripts/common.sh@354 -- # echo 1
00:26:27.796     23:59:58	-- scripts/common.sh@364 -- # ver1[v]=1
00:26:27.796      23:59:58	-- scripts/common.sh@365 -- # decimal 2
00:26:27.796      23:59:58	-- scripts/common.sh@352 -- # local d=2
00:26:27.796      23:59:58	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:27.796      23:59:58	-- scripts/common.sh@354 -- # echo 2
00:26:27.796     23:59:58	-- scripts/common.sh@365 -- # ver2[v]=2
00:26:27.796     23:59:58	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:26:27.796     23:59:58	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:26:27.796     23:59:58	-- scripts/common.sh@367 -- # return 0
00:26:27.796     23:59:58	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:27.796     23:59:58	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:26:27.796  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:27.796  		--rc genhtml_branch_coverage=1
00:26:27.796  		--rc genhtml_function_coverage=1
00:26:27.796  		--rc genhtml_legend=1
00:26:27.796  		--rc geninfo_all_blocks=1
00:26:27.796  		--rc geninfo_unexecuted_blocks=1
00:26:27.796  		
00:26:27.796  		'
00:26:27.796     23:59:58	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:26:27.796  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:27.796  		--rc genhtml_branch_coverage=1
00:26:27.796  		--rc genhtml_function_coverage=1
00:26:27.796  		--rc genhtml_legend=1
00:26:27.796  		--rc geninfo_all_blocks=1
00:26:27.796  		--rc geninfo_unexecuted_blocks=1
00:26:27.796  		
00:26:27.796  		'
00:26:27.797     23:59:58	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:26:27.797  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:27.797  		--rc genhtml_branch_coverage=1
00:26:27.797  		--rc genhtml_function_coverage=1
00:26:27.797  		--rc genhtml_legend=1
00:26:27.797  		--rc geninfo_all_blocks=1
00:26:27.797  		--rc geninfo_unexecuted_blocks=1
00:26:27.797  		
00:26:27.797  		'
00:26:27.797     23:59:58	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:26:27.797  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:27.797  		--rc genhtml_branch_coverage=1
00:26:27.797  		--rc genhtml_function_coverage=1
00:26:27.797  		--rc genhtml_legend=1
00:26:27.797  		--rc geninfo_all_blocks=1
00:26:27.797  		--rc geninfo_unexecuted_blocks=1
00:26:27.797  		
00:26:27.797  		'
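
The trace above is scripts/common.sh deciding whether the installed lcov understands the branch/function coverage switches: lcov --version | awk '{print $NF}' yields the version (1.15 here), which is compared field by field against 2; since 1 < 2 the comparison returns "less than" and LCOV_OPTS/LCOV are exported with the coverage flags shown. A condensed reconstruction of that field-wise compare (not the exact cmp_versions source):

    version_lt() {
        # split both version strings on . - : (the IFS set in the trace) and
        # compare numerically per field, padding the shorter list with zeros
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "enable branch coverage flags"
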
00:26:27.797    23:59:58	-- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:26:27.797     23:59:58	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:27.797     23:59:58	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:27.797     23:59:58	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:27.797      23:59:58	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:26:27.797      23:59:58	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:26:27.797      23:59:58	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:26:27.797      23:59:58	-- paths/export.sh@5 -- # export PATH
00:26:27.797      23:59:58	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:26:27.797   23:59:58	-- dd/posix.sh@121 -- # msg[0]=', using AIO'
00:26:27.797   23:59:58	-- dd/posix.sh@122 -- # msg[1]=', liburing in use'
00:26:27.797   23:59:58	-- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO'
00:26:27.797   23:59:58	-- dd/posix.sh@125 -- # trap cleanup EXIT
00:26:27.797   23:59:58	-- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:26:27.797   23:59:58	-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:27.797   23:59:58	-- dd/posix.sh@130 -- # tests
00:26:27.797   23:59:58	-- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO'
00:26:27.797  * First test run, using AIO
00:26:27.797   23:59:58	-- dd/posix.sh@102 -- # run_test dd_flag_append append
00:26:27.797   23:59:58	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:27.797   23:59:58	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:27.797   23:59:58	-- common/autotest_common.sh@10 -- # set +x
00:26:27.797  ************************************
00:26:27.797  START TEST dd_flag_append
00:26:27.797  ************************************
00:26:27.797   23:59:58	-- common/autotest_common.sh@1114 -- # append
00:26:27.797   23:59:58	-- dd/posix.sh@16 -- # local dump0
00:26:27.797   23:59:58	-- dd/posix.sh@17 -- # local dump1
00:26:27.797    23:59:58	-- dd/posix.sh@19 -- # gen_bytes 32
00:26:27.797    23:59:58	-- dd/common.sh@98 -- # xtrace_disable
00:26:27.797    23:59:58	-- common/autotest_common.sh@10 -- # set +x
00:26:27.797   23:59:58	-- dd/posix.sh@19 -- # dump0=yzc7nin5yy62zl8pjuv6wff0rbzi6u9w
00:26:27.797    23:59:58	-- dd/posix.sh@20 -- # gen_bytes 32
00:26:27.797    23:59:58	-- dd/common.sh@98 -- # xtrace_disable
00:26:27.797    23:59:58	-- common/autotest_common.sh@10 -- # set +x
00:26:27.797   23:59:58	-- dd/posix.sh@20 -- # dump1=b51jv5aw4e0dq1xjfmhvb0zmzfb5ea0t
00:26:27.797   23:59:58	-- dd/posix.sh@22 -- # printf %s yzc7nin5yy62zl8pjuv6wff0rbzi6u9w
00:26:27.797   23:59:58	-- dd/posix.sh@23 -- # printf %s b51jv5aw4e0dq1xjfmhvb0zmzfb5ea0t
00:26:27.797   23:59:58	-- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append
00:26:27.797  [2024-12-13 23:59:58.452125] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:27.797  [2024-12-13 23:59:58.452327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134034 ]
00:26:28.057  [2024-12-13 23:59:58.618427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:28.317  [2024-12-13 23:59:58.798430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:28.317  
[2024-12-14T00:00:00.429Z] Copying: 32/32 [B] (average 31 kBps)
00:26:29.697  
00:26:29.697   00:00:00	-- dd/posix.sh@27 -- # [[ b51jv5aw4e0dq1xjfmhvb0zmzfb5ea0tyzc7nin5yy62zl8pjuv6wff0rbzi6u9w == \b\5\1\j\v\5\a\w\4\e\0\d\q\1\x\j\f\m\h\v\b\0\z\m\z\f\b\5\e\a\0\t\y\z\c\7\n\i\n\5\y\y\6\2\z\l\8\p\j\u\v\6\w\f\f\0\r\b\z\i\6\u\9\w ]]
00:26:29.697  
00:26:29.697  real	0m1.636s
00:26:29.697  user	0m1.239s
00:26:29.697  sys	0m0.260s
00:26:29.697   00:00:00	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:29.697   00:00:00	-- common/autotest_common.sh@10 -- # set +x
00:26:29.697  ************************************
00:26:29.697  END TEST dd_flag_append
00:26:29.697  ************************************
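
dd_flag_append, closed out above, checks spdk_dd's --oflag=append: two 32-character random strings (dump0 and dump1 in the trace) are written to their dump files, dump0 is then copied onto dd.dump1 with append set, and the [[ ... ]] a few lines up confirms dd.dump1 now holds dump1 immediately followed by dump0. In outline, with gen_bytes again standing in for the payload helper:

    dump0=$(gen_bytes 32)
    dump1=$(gen_bytes 32)
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ $(< dd.dump1) == "$dump1$dump0" ]]    # original contents, then the appended copy
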
00:26:29.697   00:00:00	-- dd/posix.sh@103 -- # run_test dd_flag_directory directory
00:26:29.697   00:00:00	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:29.697   00:00:00	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:29.697   00:00:00	-- common/autotest_common.sh@10 -- # set +x
00:26:29.697  ************************************
00:26:29.697  START TEST dd_flag_directory
00:26:29.697  ************************************
00:26:29.697   00:00:00	-- common/autotest_common.sh@1114 -- # directory
00:26:29.697   00:00:00	-- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:26:29.697   00:00:00	-- common/autotest_common.sh@650 -- # local es=0
00:26:29.697   00:00:00	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:26:29.697   00:00:00	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:29.697   00:00:00	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:29.697    00:00:00	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:29.697   00:00:00	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:29.697    00:00:00	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:29.697   00:00:00	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:29.697   00:00:00	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:29.697   00:00:00	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:29.697   00:00:00	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:26:29.697  [2024-12-14 00:00:00.140308] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:29.697  [2024-12-14 00:00:00.140694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134087 ]
00:26:29.697  [2024-12-14 00:00:00.309545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:29.957  [2024-12-14 00:00:00.489160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:30.216  [2024-12-14 00:00:00.740511] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:26:30.216  [2024-12-14 00:00:00.740833] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:26:30.216  [2024-12-14 00:00:00.740997] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:30.785  [2024-12-14 00:00:01.322730] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:26:31.044   00:00:01	-- common/autotest_common.sh@653 -- # es=236
00:26:31.044   00:00:01	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:31.044   00:00:01	-- common/autotest_common.sh@662 -- # es=108
00:26:31.044   00:00:01	-- common/autotest_common.sh@663 -- # case "$es" in
00:26:31.044   00:00:01	-- common/autotest_common.sh@670 -- # es=1
00:26:31.044   00:00:01	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:31.044   00:00:01	-- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:26:31.044   00:00:01	-- common/autotest_common.sh@650 -- # local es=0
00:26:31.044   00:00:01	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:26:31.044   00:00:01	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:31.045   00:00:01	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:31.045    00:00:01	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:31.045   00:00:01	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:31.045    00:00:01	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:31.045   00:00:01	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:31.045   00:00:01	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:31.045   00:00:01	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:31.045   00:00:01	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:26:31.045  [2024-12-14 00:00:01.718122] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:31.045  [2024-12-14 00:00:01.718339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134115 ]
00:26:31.304  [2024-12-14 00:00:01.885902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:31.562  [2024-12-14 00:00:02.057715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:31.822  [2024-12-14 00:00:02.310513] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:26:31.822  [2024-12-14 00:00:02.310825] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:26:31.822  [2024-12-14 00:00:02.310969] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:32.390  [2024-12-14 00:00:02.896637] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:26:32.649   00:00:03	-- common/autotest_common.sh@653 -- # es=236
00:26:32.649   00:00:03	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:32.649   00:00:03	-- common/autotest_common.sh@662 -- # es=108
00:26:32.649   00:00:03	-- common/autotest_common.sh@663 -- # case "$es" in
00:26:32.649   00:00:03	-- common/autotest_common.sh@670 -- # es=1
00:26:32.649   00:00:03	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:32.649  
00:26:32.649  real	0m3.151s
00:26:32.649  user	0m2.493s
00:26:32.649  sys	0m0.454s
00:26:32.649   00:00:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:32.649   00:00:03	-- common/autotest_common.sh@10 -- # set +x
00:26:32.649  ************************************
00:26:32.649  END TEST dd_flag_directory
00:26:32.649  ************************************
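
The es bookkeeping above is the tail of the NOT wrapper from autotest_common.sh: dd_flag_directory points spdk_dd at a regular file with --iflag=directory / --oflag=directory, expects the "Not a directory" failure, and NOT inverts that failure into a pass. A reconstruction consistent with the traced values (es=236 reduced to 108, then normalised to 1), though the exact case branches are not shown in the trace:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es - 128 ))    # strip the signal offset: 236 -> 108
        case "$es" in
            108) es=1 ;;                        # collapse known failure codes, as traced
        esac
        (( !es == 0 ))                          # succeed only if the wrapped command failed
    }
    NOT spdk_dd --if=dd.dump0 --iflag=directory --of=dd.dump0
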
00:26:32.649   00:00:03	-- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow
00:26:32.649   00:00:03	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:32.649   00:00:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:32.649   00:00:03	-- common/autotest_common.sh@10 -- # set +x
00:26:32.649  ************************************
00:26:32.649  START TEST dd_flag_nofollow
00:26:32.649  ************************************
00:26:32.649   00:00:03	-- common/autotest_common.sh@1114 -- # nofollow
00:26:32.649   00:00:03	-- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:26:32.649   00:00:03	-- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:26:32.649   00:00:03	-- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:26:32.649   00:00:03	-- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:26:32.649   00:00:03	-- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:32.649   00:00:03	-- common/autotest_common.sh@650 -- # local es=0
00:26:32.649   00:00:03	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:32.649   00:00:03	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:32.649   00:00:03	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:32.649    00:00:03	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:32.649   00:00:03	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:32.650    00:00:03	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:32.650   00:00:03	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:32.650   00:00:03	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:32.650   00:00:03	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:32.650   00:00:03	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:32.650  [2024-12-14 00:00:03.353078] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:32.650  [2024-12-14 00:00:03.353444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134153 ]
00:26:32.909  [2024-12-14 00:00:03.517024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:33.168  [2024-12-14 00:00:03.705938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:33.427  [2024-12-14 00:00:03.961093] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:26:33.427  [2024-12-14 00:00:03.961400] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:26:33.427  [2024-12-14 00:00:03.961568] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:33.994  [2024-12-14 00:00:04.541278] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:26:34.254   00:00:04	-- common/autotest_common.sh@653 -- # es=216
00:26:34.254   00:00:04	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:34.254   00:00:04	-- common/autotest_common.sh@662 -- # es=88
00:26:34.254   00:00:04	-- common/autotest_common.sh@663 -- # case "$es" in
00:26:34.254   00:00:04	-- common/autotest_common.sh@670 -- # es=1
00:26:34.254   00:00:04	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:34.254   00:00:04	-- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:26:34.254   00:00:04	-- common/autotest_common.sh@650 -- # local es=0
00:26:34.254   00:00:04	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:26:34.254   00:00:04	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:34.254   00:00:04	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:34.254    00:00:04	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:34.254   00:00:04	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:34.254    00:00:04	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:34.254   00:00:04	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:34.254   00:00:04	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:34.254   00:00:04	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:34.254   00:00:04	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:26:34.254  [2024-12-14 00:00:04.941968] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:34.254  [2024-12-14 00:00:04.942318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134185 ]
00:26:34.515  [2024-12-14 00:00:05.109813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:34.775  [2024-12-14 00:00:05.304590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:35.034  [2024-12-14 00:00:05.557919] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:26:35.034  [2024-12-14 00:00:05.558217] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:26:35.034  [2024-12-14 00:00:05.558283] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:35.603  [2024-12-14 00:00:06.148047] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:26:35.863   00:00:06	-- common/autotest_common.sh@653 -- # es=216
00:26:35.863   00:00:06	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:35.863   00:00:06	-- common/autotest_common.sh@662 -- # es=88
00:26:35.863   00:00:06	-- common/autotest_common.sh@663 -- # case "$es" in
00:26:35.863   00:00:06	-- common/autotest_common.sh@670 -- # es=1
00:26:35.863   00:00:06	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:35.863   00:00:06	-- dd/posix.sh@46 -- # gen_bytes 512
00:26:35.863   00:00:06	-- dd/common.sh@98 -- # xtrace_disable
00:26:35.863   00:00:06	-- common/autotest_common.sh@10 -- # set +x
00:26:35.863   00:00:06	-- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:35.863  [2024-12-14 00:00:06.573643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:35.863  [2024-12-14 00:00:06.573860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134267 ]
00:26:36.123  [2024-12-14 00:00:06.745982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:36.398  [2024-12-14 00:00:06.930116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:36.670  
[2024-12-14T00:00:08.340Z] Copying: 512/512 [B] (average 500 kBps)
00:26:37.608  
00:26:37.608  ************************************
00:26:37.608  END TEST dd_flag_nofollow
00:26:37.608  ************************************
00:26:37.608   00:00:08	-- dd/posix.sh@49 -- # [[ mgt8ey0pjrzp03qaxqp32ze9srzqqvd3awc23n32vjs56jil81vhpd7uanla118fxef91ddfog69tn9rij73n3boeeco357ldnsommu6g45h4jttg6vjru65q8jtdoddctbbd6u6t2j6eic4g7zr9o4uik8s3m7amr79r3469ohtndfik33tle2shav5wtw9guj5x9k0crg7nllty07exo4rfmho8r0hm9dim3roszxfpp401tcbwepmm5zbowcxk5k5lfxs1ffgn1wnf86nyqm6ht4fgieba8gab733mjkzaxbrc42rtj4yyrrxm0qzk2akk17p9tqrz235qsnrvy4wnna82qdpm2jnd20tmtxi6lfjgj9rn5f43n92lv7diw4s9ghsk1vs4rd4099kjje8md53q861eb29bjvmapseai4scd4qh0v7f695ctijuhevri5alcfv0758hthgy0jnbkupfsjfv1y7w5de8r4uh1c3p2ix0plfy0ygn262 == \m\g\t\8\e\y\0\p\j\r\z\p\0\3\q\a\x\q\p\3\2\z\e\9\s\r\z\q\q\v\d\3\a\w\c\2\3\n\3\2\v\j\s\5\6\j\i\l\8\1\v\h\p\d\7\u\a\n\l\a\1\1\8\f\x\e\f\9\1\d\d\f\o\g\6\9\t\n\9\r\i\j\7\3\n\3\b\o\e\e\c\o\3\5\7\l\d\n\s\o\m\m\u\6\g\4\5\h\4\j\t\t\g\6\v\j\r\u\6\5\q\8\j\t\d\o\d\d\c\t\b\b\d\6\u\6\t\2\j\6\e\i\c\4\g\7\z\r\9\o\4\u\i\k\8\s\3\m\7\a\m\r\7\9\r\3\4\6\9\o\h\t\n\d\f\i\k\3\3\t\l\e\2\s\h\a\v\5\w\t\w\9\g\u\j\5\x\9\k\0\c\r\g\7\n\l\l\t\y\0\7\e\x\o\4\r\f\m\h\o\8\r\0\h\m\9\d\i\m\3\r\o\s\z\x\f\p\p\4\0\1\t\c\b\w\e\p\m\m\5\z\b\o\w\c\x\k\5\k\5\l\f\x\s\1\f\f\g\n\1\w\n\f\8\6\n\y\q\m\6\h\t\4\f\g\i\e\b\a\8\g\a\b\7\3\3\m\j\k\z\a\x\b\r\c\4\2\r\t\j\4\y\y\r\r\x\m\0\q\z\k\2\a\k\k\1\7\p\9\t\q\r\z\2\3\5\q\s\n\r\v\y\4\w\n\n\a\8\2\q\d\p\m\2\j\n\d\2\0\t\m\t\x\i\6\l\f\j\g\j\9\r\n\5\f\4\3\n\9\2\l\v\7\d\i\w\4\s\9\g\h\s\k\1\v\s\4\r\d\4\0\9\9\k\j\j\e\8\m\d\5\3\q\8\6\1\e\b\2\9\b\j\v\m\a\p\s\e\a\i\4\s\c\d\4\q\h\0\v\7\f\6\9\5\c\t\i\j\u\h\e\v\r\i\5\a\l\c\f\v\0\7\5\8\h\t\h\g\y\0\j\n\b\k\u\p\f\s\j\f\v\1\y\7\w\5\d\e\8\r\4\u\h\1\c\3\p\2\i\x\0\p\l\f\y\0\y\g\n\2\6\2 ]]
00:26:37.608  
00:26:37.608  real	0m4.865s
00:26:37.608  user	0m3.872s
00:26:37.608  sys	0m0.658s
00:26:37.608   00:00:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:37.608   00:00:08	-- common/autotest_common.sh@10 -- # set +x
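
dd_flag_nofollow, finished above, exercises symlink handling: dd.dump0.link and dd.dump1.link are created with ln -fs, any access through a link with --iflag=nofollow or --oflag=nofollow must fail with "Too many levels of symbolic links" (ELOOP, inverted to a pass by NOT), and a plain copy through the link, which follows it, must succeed (the 512-byte pass above). In outline:

    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link
    NOT spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1    # expected: ELOOP
    NOT spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow    # expected: ELOOP
    spdk_dd --if=dd.dump0.link --of=dd.dump1                         # no flag, link is followed
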
00:26:37.608   00:00:08	-- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime
00:26:37.608   00:00:08	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:37.608   00:00:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:37.608   00:00:08	-- common/autotest_common.sh@10 -- # set +x
00:26:37.608  ************************************
00:26:37.608  START TEST dd_flag_noatime
00:26:37.608  ************************************
00:26:37.608   00:00:08	-- common/autotest_common.sh@1114 -- # noatime
00:26:37.608   00:00:08	-- dd/posix.sh@53 -- # local atime_if
00:26:37.608   00:00:08	-- dd/posix.sh@54 -- # local atime_of
00:26:37.608   00:00:08	-- dd/posix.sh@58 -- # gen_bytes 512
00:26:37.608   00:00:08	-- dd/common.sh@98 -- # xtrace_disable
00:26:37.608   00:00:08	-- common/autotest_common.sh@10 -- # set +x
00:26:37.608    00:00:08	-- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:26:37.608   00:00:08	-- dd/posix.sh@60 -- # atime_if=1734134407
00:26:37.608    00:00:08	-- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:37.608   00:00:08	-- dd/posix.sh@61 -- # atime_of=1734134408
00:26:37.608   00:00:08	-- dd/posix.sh@66 -- # sleep 1
00:26:38.546   00:00:09	-- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:38.805  [2024-12-14 00:00:09.307488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:38.805  [2024-12-14 00:00:09.308458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134331 ]
00:26:38.805  [2024-12-14 00:00:09.482030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:39.064  [2024-12-14 00:00:09.693761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:39.323  
[2024-12-14T00:00:10.992Z] Copying: 512/512 [B] (average 500 kBps)
00:26:40.260  
00:26:40.260    00:00:10	-- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:26:40.260   00:00:10	-- dd/posix.sh@69 -- # (( atime_if == 1734134407 ))
00:26:40.260    00:00:10	-- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:40.520   00:00:10	-- dd/posix.sh@70 -- # (( atime_of == 1734134408 ))
00:26:40.520   00:00:10	-- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:40.520  [2024-12-14 00:00:11.056177] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:40.520  [2024-12-14 00:00:11.056861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134358 ]
00:26:40.520  [2024-12-14 00:00:11.224133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:40.779  [2024-12-14 00:00:11.395371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:41.038  
[2024-12-14T00:00:12.709Z] Copying: 512/512 [B] (average 500 kBps)
00:26:41.977  
00:26:41.977    00:00:12	-- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:26:41.977  ************************************
00:26:41.977  END TEST dd_flag_noatime
00:26:41.977  ************************************
00:26:41.977   00:00:12	-- dd/posix.sh@73 -- # (( atime_if < 1734134411 ))
00:26:41.977  
00:26:41.977  real	0m4.407s
00:26:41.977  user	0m2.624s
00:26:41.977  sys	0m0.514s
00:26:41.977   00:00:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:41.977   00:00:12	-- common/autotest_common.sh@10 -- # set +x
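
dd_flag_noatime, above, leans on stat --printf=%X (the access time as a Unix epoch): a copy reading dd.dump0 with --iflag=noatime must leave its atime exactly as first recorded (the == 1734134407 check), while a plain copy afterwards must let it advance (the < 1734134411 check against a later reading). In outline, with this run's epochs as placeholders:

    atime_if=$(stat --printf=%X dd.dump0)
    sleep 1                                            # so an atime change is observable
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_if ))     # noatime: unchanged
    spdk_dd --if=dd.dump0 --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) > atime_if ))      # a normal read bumps atime
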
00:26:41.977   00:00:12	-- dd/posix.sh@106 -- # run_test dd_flags_misc io
00:26:41.977   00:00:12	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:41.977   00:00:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:41.977   00:00:12	-- common/autotest_common.sh@10 -- # set +x
00:26:41.977  ************************************
00:26:41.977  START TEST dd_flags_misc
00:26:41.977  ************************************
00:26:41.977   00:00:12	-- common/autotest_common.sh@1114 -- # io
00:26:41.977   00:00:12	-- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw
00:26:41.977   00:00:12	-- dd/posix.sh@81 -- # flags_ro=(direct nonblock)
00:26:41.977   00:00:12	-- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync)
00:26:41.977   00:00:12	-- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:26:41.977   00:00:12	-- dd/posix.sh@86 -- # gen_bytes 512
00:26:41.977   00:00:12	-- dd/common.sh@98 -- # xtrace_disable
00:26:41.977   00:00:12	-- common/autotest_common.sh@10 -- # set +x
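
dd_flags_misc, set up above, crosses every input flag with every output flag: flags_ro=(direct nonblock) on the read side, the same list plus sync and dsync on the write side, eight combinations in all; the passes that follow are the --iflag=direct row. The loop being traced, in outline:

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            spdk_dd --if=dd.dump0 --iflag=$flag_ro --of=dd.dump1 --oflag=$flag_rw
            # each pass is followed by the same payload/read-back comparison as above
        done
    done
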
00:26:41.977   00:00:12	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:26:41.977   00:00:12	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:26:42.236  [2024-12-14 00:00:12.744059] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:42.236  [2024-12-14 00:00:12.744262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134401 ]
00:26:42.236  [2024-12-14 00:00:12.907733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:42.495  [2024-12-14 00:00:13.078011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:42.754  
[2024-12-14T00:00:14.424Z] Copying: 512/512 [B] (average 500 kBps)
00:26:43.692  
00:26:43.692   00:00:14	-- dd/posix.sh@93 -- # [[ 3b7gweeb5rbf6ovndy9b2dedseq6oxac4rl4qjhhqzawqtgs635ax4zoltorrhg4r971t55734kr7gi6mj7vkl5c4xk4jhec3ssnauvuajzuf5u7utal6e75rhpd8sf3v1ju86dfjy6f8nopnool27dd5yg66k0gvize418ginn5glngww25gx5zpopfit9f0vftgdubct5z9ejbl540us6qpz371t9cncq0m5ysj5yyyfxj41y39sg4dqj2p88zic4w0d9xfchy6a4bou2dzgznp7uz3tm8nbw068go71kwd9n5jlsdwn00e3gj8lmg95btwdfndq83hdd7wup4uwppcy1g86qnz3w126i5gwn7ke00so5m2y5jgb67n426mwqz6njdhmuk6lvd6x2sxlcb0dptjss9t78yttyqbl2g9dsegb6xmhgtbbw6dl40rnaofa5od6ecxeteip6yfkrcyxyzg3gvgv4jrlcvfh7k72n68dd3ff76pt83uwup == \3\b\7\g\w\e\e\b\5\r\b\f\6\o\v\n\d\y\9\b\2\d\e\d\s\e\q\6\o\x\a\c\4\r\l\4\q\j\h\h\q\z\a\w\q\t\g\s\6\3\5\a\x\4\z\o\l\t\o\r\r\h\g\4\r\9\7\1\t\5\5\7\3\4\k\r\7\g\i\6\m\j\7\v\k\l\5\c\4\x\k\4\j\h\e\c\3\s\s\n\a\u\v\u\a\j\z\u\f\5\u\7\u\t\a\l\6\e\7\5\r\h\p\d\8\s\f\3\v\1\j\u\8\6\d\f\j\y\6\f\8\n\o\p\n\o\o\l\2\7\d\d\5\y\g\6\6\k\0\g\v\i\z\e\4\1\8\g\i\n\n\5\g\l\n\g\w\w\2\5\g\x\5\z\p\o\p\f\i\t\9\f\0\v\f\t\g\d\u\b\c\t\5\z\9\e\j\b\l\5\4\0\u\s\6\q\p\z\3\7\1\t\9\c\n\c\q\0\m\5\y\s\j\5\y\y\y\f\x\j\4\1\y\3\9\s\g\4\d\q\j\2\p\8\8\z\i\c\4\w\0\d\9\x\f\c\h\y\6\a\4\b\o\u\2\d\z\g\z\n\p\7\u\z\3\t\m\8\n\b\w\0\6\8\g\o\7\1\k\w\d\9\n\5\j\l\s\d\w\n\0\0\e\3\g\j\8\l\m\g\9\5\b\t\w\d\f\n\d\q\8\3\h\d\d\7\w\u\p\4\u\w\p\p\c\y\1\g\8\6\q\n\z\3\w\1\2\6\i\5\g\w\n\7\k\e\0\0\s\o\5\m\2\y\5\j\g\b\6\7\n\4\2\6\m\w\q\z\6\n\j\d\h\m\u\k\6\l\v\d\6\x\2\s\x\l\c\b\0\d\p\t\j\s\s\9\t\7\8\y\t\t\y\q\b\l\2\g\9\d\s\e\g\b\6\x\m\h\g\t\b\b\w\6\d\l\4\0\r\n\a\o\f\a\5\o\d\6\e\c\x\e\t\e\i\p\6\y\f\k\r\c\y\x\y\z\g\3\g\v\g\v\4\j\r\l\c\v\f\h\7\k\7\2\n\6\8\d\d\3\f\f\7\6\p\t\8\3\u\w\u\p ]]
00:26:43.692   00:00:14	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:26:43.692   00:00:14	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:26:43.692  [2024-12-14 00:00:14.352435] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:43.692  [2024-12-14 00:00:14.352639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134427 ]
00:26:43.951  [2024-12-14 00:00:14.520029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:44.210  [2024-12-14 00:00:14.690380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:44.471  
[2024-12-14T00:00:16.140Z] Copying: 512/512 [B] (average 500 kBps)
00:26:45.408  
00:26:45.408   00:00:15	-- dd/posix.sh@93 -- # [[ 3b7gweeb5rbf6ovndy9b2dedseq6oxac4rl4qjhhqzawqtgs635ax4zoltorrhg4r971t55734kr7gi6mj7vkl5c4xk4jhec3ssnauvuajzuf5u7utal6e75rhpd8sf3v1ju86dfjy6f8nopnool27dd5yg66k0gvize418ginn5glngww25gx5zpopfit9f0vftgdubct5z9ejbl540us6qpz371t9cncq0m5ysj5yyyfxj41y39sg4dqj2p88zic4w0d9xfchy6a4bou2dzgznp7uz3tm8nbw068go71kwd9n5jlsdwn00e3gj8lmg95btwdfndq83hdd7wup4uwppcy1g86qnz3w126i5gwn7ke00so5m2y5jgb67n426mwqz6njdhmuk6lvd6x2sxlcb0dptjss9t78yttyqbl2g9dsegb6xmhgtbbw6dl40rnaofa5od6ecxeteip6yfkrcyxyzg3gvgv4jrlcvfh7k72n68dd3ff76pt83uwup == \3\b\7\g\w\e\e\b\5\r\b\f\6\o\v\n\d\y\9\b\2\d\e\d\s\e\q\6\o\x\a\c\4\r\l\4\q\j\h\h\q\z\a\w\q\t\g\s\6\3\5\a\x\4\z\o\l\t\o\r\r\h\g\4\r\9\7\1\t\5\5\7\3\4\k\r\7\g\i\6\m\j\7\v\k\l\5\c\4\x\k\4\j\h\e\c\3\s\s\n\a\u\v\u\a\j\z\u\f\5\u\7\u\t\a\l\6\e\7\5\r\h\p\d\8\s\f\3\v\1\j\u\8\6\d\f\j\y\6\f\8\n\o\p\n\o\o\l\2\7\d\d\5\y\g\6\6\k\0\g\v\i\z\e\4\1\8\g\i\n\n\5\g\l\n\g\w\w\2\5\g\x\5\z\p\o\p\f\i\t\9\f\0\v\f\t\g\d\u\b\c\t\5\z\9\e\j\b\l\5\4\0\u\s\6\q\p\z\3\7\1\t\9\c\n\c\q\0\m\5\y\s\j\5\y\y\y\f\x\j\4\1\y\3\9\s\g\4\d\q\j\2\p\8\8\z\i\c\4\w\0\d\9\x\f\c\h\y\6\a\4\b\o\u\2\d\z\g\z\n\p\7\u\z\3\t\m\8\n\b\w\0\6\8\g\o\7\1\k\w\d\9\n\5\j\l\s\d\w\n\0\0\e\3\g\j\8\l\m\g\9\5\b\t\w\d\f\n\d\q\8\3\h\d\d\7\w\u\p\4\u\w\p\p\c\y\1\g\8\6\q\n\z\3\w\1\2\6\i\5\g\w\n\7\k\e\0\0\s\o\5\m\2\y\5\j\g\b\6\7\n\4\2\6\m\w\q\z\6\n\j\d\h\m\u\k\6\l\v\d\6\x\2\s\x\l\c\b\0\d\p\t\j\s\s\9\t\7\8\y\t\t\y\q\b\l\2\g\9\d\s\e\g\b\6\x\m\h\g\t\b\b\w\6\d\l\4\0\r\n\a\o\f\a\5\o\d\6\e\c\x\e\t\e\i\p\6\y\f\k\r\c\y\x\y\z\g\3\g\v\g\v\4\j\r\l\c\v\f\h\7\k\7\2\n\6\8\d\d\3\f\f\7\6\p\t\8\3\u\w\u\p ]]
00:26:45.408   00:00:15	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:26:45.408   00:00:15	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:26:45.408  [2024-12-14 00:00:15.964656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:45.408  [2024-12-14 00:00:15.964818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134453 ]
00:26:45.408  [2024-12-14 00:00:16.118821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:45.667  [2024-12-14 00:00:16.278771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:45.926  
[2024-12-14T00:00:17.596Z] Copying: 512/512 [B] (average 166 kBps)
00:26:46.864  
00:26:46.864   00:00:17	-- dd/posix.sh@93 -- # [[ 3b7gweeb5rbf6ovndy9b2dedseq6oxac4rl4qjhhqzawqtgs635ax4zoltorrhg4r971t55734kr7gi6mj7vkl5c4xk4jhec3ssnauvuajzuf5u7utal6e75rhpd8sf3v1ju86dfjy6f8nopnool27dd5yg66k0gvize418ginn5glngww25gx5zpopfit9f0vftgdubct5z9ejbl540us6qpz371t9cncq0m5ysj5yyyfxj41y39sg4dqj2p88zic4w0d9xfchy6a4bou2dzgznp7uz3tm8nbw068go71kwd9n5jlsdwn00e3gj8lmg95btwdfndq83hdd7wup4uwppcy1g86qnz3w126i5gwn7ke00so5m2y5jgb67n426mwqz6njdhmuk6lvd6x2sxlcb0dptjss9t78yttyqbl2g9dsegb6xmhgtbbw6dl40rnaofa5od6ecxeteip6yfkrcyxyzg3gvgv4jrlcvfh7k72n68dd3ff76pt83uwup == \3\b\7\g\w\e\e\b\5\r\b\f\6\o\v\n\d\y\9\b\2\d\e\d\s\e\q\6\o\x\a\c\4\r\l\4\q\j\h\h\q\z\a\w\q\t\g\s\6\3\5\a\x\4\z\o\l\t\o\r\r\h\g\4\r\9\7\1\t\5\5\7\3\4\k\r\7\g\i\6\m\j\7\v\k\l\5\c\4\x\k\4\j\h\e\c\3\s\s\n\a\u\v\u\a\j\z\u\f\5\u\7\u\t\a\l\6\e\7\5\r\h\p\d\8\s\f\3\v\1\j\u\8\6\d\f\j\y\6\f\8\n\o\p\n\o\o\l\2\7\d\d\5\y\g\6\6\k\0\g\v\i\z\e\4\1\8\g\i\n\n\5\g\l\n\g\w\w\2\5\g\x\5\z\p\o\p\f\i\t\9\f\0\v\f\t\g\d\u\b\c\t\5\z\9\e\j\b\l\5\4\0\u\s\6\q\p\z\3\7\1\t\9\c\n\c\q\0\m\5\y\s\j\5\y\y\y\f\x\j\4\1\y\3\9\s\g\4\d\q\j\2\p\8\8\z\i\c\4\w\0\d\9\x\f\c\h\y\6\a\4\b\o\u\2\d\z\g\z\n\p\7\u\z\3\t\m\8\n\b\w\0\6\8\g\o\7\1\k\w\d\9\n\5\j\l\s\d\w\n\0\0\e\3\g\j\8\l\m\g\9\5\b\t\w\d\f\n\d\q\8\3\h\d\d\7\w\u\p\4\u\w\p\p\c\y\1\g\8\6\q\n\z\3\w\1\2\6\i\5\g\w\n\7\k\e\0\0\s\o\5\m\2\y\5\j\g\b\6\7\n\4\2\6\m\w\q\z\6\n\j\d\h\m\u\k\6\l\v\d\6\x\2\s\x\l\c\b\0\d\p\t\j\s\s\9\t\7\8\y\t\t\y\q\b\l\2\g\9\d\s\e\g\b\6\x\m\h\g\t\b\b\w\6\d\l\4\0\r\n\a\o\f\a\5\o\d\6\e\c\x\e\t\e\i\p\6\y\f\k\r\c\y\x\y\z\g\3\g\v\g\v\4\j\r\l\c\v\f\h\7\k\7\2\n\6\8\d\d\3\f\f\7\6\p\t\8\3\u\w\u\p ]]
00:26:46.864   00:00:17	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:26:46.864   00:00:17	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:26:46.864  [2024-12-14 00:00:17.563299] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:46.864  [2024-12-14 00:00:17.563528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134475 ]
00:26:47.124  [2024-12-14 00:00:17.731084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:47.381  [2024-12-14 00:00:17.914761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:47.640  
[2024-12-14T00:00:19.326Z] Copying: 512/512 [B] (average 100 kBps)
00:26:48.594  
00:26:48.594   00:00:19	-- dd/posix.sh@93 -- # [[ 3b7gweeb5rbf6ovndy9b2dedseq6oxac4rl4qjhhqzawqtgs635ax4zoltorrhg4r971t55734kr7gi6mj7vkl5c4xk4jhec3ssnauvuajzuf5u7utal6e75rhpd8sf3v1ju86dfjy6f8nopnool27dd5yg66k0gvize418ginn5glngww25gx5zpopfit9f0vftgdubct5z9ejbl540us6qpz371t9cncq0m5ysj5yyyfxj41y39sg4dqj2p88zic4w0d9xfchy6a4bou2dzgznp7uz3tm8nbw068go71kwd9n5jlsdwn00e3gj8lmg95btwdfndq83hdd7wup4uwppcy1g86qnz3w126i5gwn7ke00so5m2y5jgb67n426mwqz6njdhmuk6lvd6x2sxlcb0dptjss9t78yttyqbl2g9dsegb6xmhgtbbw6dl40rnaofa5od6ecxeteip6yfkrcyxyzg3gvgv4jrlcvfh7k72n68dd3ff76pt83uwup == \3\b\7\g\w\e\e\b\5\r\b\f\6\o\v\n\d\y\9\b\2\d\e\d\s\e\q\6\o\x\a\c\4\r\l\4\q\j\h\h\q\z\a\w\q\t\g\s\6\3\5\a\x\4\z\o\l\t\o\r\r\h\g\4\r\9\7\1\t\5\5\7\3\4\k\r\7\g\i\6\m\j\7\v\k\l\5\c\4\x\k\4\j\h\e\c\3\s\s\n\a\u\v\u\a\j\z\u\f\5\u\7\u\t\a\l\6\e\7\5\r\h\p\d\8\s\f\3\v\1\j\u\8\6\d\f\j\y\6\f\8\n\o\p\n\o\o\l\2\7\d\d\5\y\g\6\6\k\0\g\v\i\z\e\4\1\8\g\i\n\n\5\g\l\n\g\w\w\2\5\g\x\5\z\p\o\p\f\i\t\9\f\0\v\f\t\g\d\u\b\c\t\5\z\9\e\j\b\l\5\4\0\u\s\6\q\p\z\3\7\1\t\9\c\n\c\q\0\m\5\y\s\j\5\y\y\y\f\x\j\4\1\y\3\9\s\g\4\d\q\j\2\p\8\8\z\i\c\4\w\0\d\9\x\f\c\h\y\6\a\4\b\o\u\2\d\z\g\z\n\p\7\u\z\3\t\m\8\n\b\w\0\6\8\g\o\7\1\k\w\d\9\n\5\j\l\s\d\w\n\0\0\e\3\g\j\8\l\m\g\9\5\b\t\w\d\f\n\d\q\8\3\h\d\d\7\w\u\p\4\u\w\p\p\c\y\1\g\8\6\q\n\z\3\w\1\2\6\i\5\g\w\n\7\k\e\0\0\s\o\5\m\2\y\5\j\g\b\6\7\n\4\2\6\m\w\q\z\6\n\j\d\h\m\u\k\6\l\v\d\6\x\2\s\x\l\c\b\0\d\p\t\j\s\s\9\t\7\8\y\t\t\y\q\b\l\2\g\9\d\s\e\g\b\6\x\m\h\g\t\b\b\w\6\d\l\4\0\r\n\a\o\f\a\5\o\d\6\e\c\x\e\t\e\i\p\6\y\f\k\r\c\y\x\y\z\g\3\g\v\g\v\4\j\r\l\c\v\f\h\7\k\7\2\n\6\8\d\d\3\f\f\7\6\p\t\8\3\u\w\u\p ]]
00:26:48.594   00:00:19	-- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:26:48.594   00:00:19	-- dd/posix.sh@86 -- # gen_bytes 512
00:26:48.594   00:00:19	-- dd/common.sh@98 -- # xtrace_disable
00:26:48.594   00:00:19	-- common/autotest_common.sh@10 -- # set +x
00:26:48.594   00:00:19	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:26:48.594   00:00:19	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:26:48.594  [2024-12-14 00:00:19.280004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:48.594  [2024-12-14 00:00:19.280214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134499 ]
00:26:48.852  [2024-12-14 00:00:19.446906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:49.111  [2024-12-14 00:00:19.633339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:49.369  
[2024-12-14T00:00:21.036Z] Copying: 512/512 [B] (average 500 kBps)
00:26:50.304  
00:26:50.304   00:00:20	-- dd/posix.sh@93 -- # [[ ks2ibz26f3augyngmaffv3avfd7w8xj8w8km0sgk111wm10eidk1j22mk2wh5hwfvxz0mzy5sq1frkjgra7rhfzi8sui2w00uubfx3ph3w80i4geqnibe5e92boosp3uwhr5vfsttsgxztuj77vs4er74ol1hl7ffau4uwij3420rx13b74qal4x2v0ug7az14twp41i8yb38vd6nrvtkf99m15lpjtkacnwu8a73mvghunso7awmat9dlfjp0mt1y5t75nsmkiosbq0zsfrj1buhitp6n8l4wvgb9m7z1c1wi2ym8ulawhcv9vf2dfbbsjgylozf7ptyxyww4yv3njoy6bbidq2ttenunhfwrmqrnz2m0caxx0v97dsng792aik9lbgnhc0l80g3q8p3pdiifbtd02bbzdlx2expxrnznlb53i4ihwhjsbgks1o6h0uiosw1jrekcaxr82thmkzkny6recnit3k6maogkimqe4k6gq2ne4jfssq7vwk == \k\s\2\i\b\z\2\6\f\3\a\u\g\y\n\g\m\a\f\f\v\3\a\v\f\d\7\w\8\x\j\8\w\8\k\m\0\s\g\k\1\1\1\w\m\1\0\e\i\d\k\1\j\2\2\m\k\2\w\h\5\h\w\f\v\x\z\0\m\z\y\5\s\q\1\f\r\k\j\g\r\a\7\r\h\f\z\i\8\s\u\i\2\w\0\0\u\u\b\f\x\3\p\h\3\w\8\0\i\4\g\e\q\n\i\b\e\5\e\9\2\b\o\o\s\p\3\u\w\h\r\5\v\f\s\t\t\s\g\x\z\t\u\j\7\7\v\s\4\e\r\7\4\o\l\1\h\l\7\f\f\a\u\4\u\w\i\j\3\4\2\0\r\x\1\3\b\7\4\q\a\l\4\x\2\v\0\u\g\7\a\z\1\4\t\w\p\4\1\i\8\y\b\3\8\v\d\6\n\r\v\t\k\f\9\9\m\1\5\l\p\j\t\k\a\c\n\w\u\8\a\7\3\m\v\g\h\u\n\s\o\7\a\w\m\a\t\9\d\l\f\j\p\0\m\t\1\y\5\t\7\5\n\s\m\k\i\o\s\b\q\0\z\s\f\r\j\1\b\u\h\i\t\p\6\n\8\l\4\w\v\g\b\9\m\7\z\1\c\1\w\i\2\y\m\8\u\l\a\w\h\c\v\9\v\f\2\d\f\b\b\s\j\g\y\l\o\z\f\7\p\t\y\x\y\w\w\4\y\v\3\n\j\o\y\6\b\b\i\d\q\2\t\t\e\n\u\n\h\f\w\r\m\q\r\n\z\2\m\0\c\a\x\x\0\v\9\7\d\s\n\g\7\9\2\a\i\k\9\l\b\g\n\h\c\0\l\8\0\g\3\q\8\p\3\p\d\i\i\f\b\t\d\0\2\b\b\z\d\l\x\2\e\x\p\x\r\n\z\n\l\b\5\3\i\4\i\h\w\h\j\s\b\g\k\s\1\o\6\h\0\u\i\o\s\w\1\j\r\e\k\c\a\x\r\8\2\t\h\m\k\z\k\n\y\6\r\e\c\n\i\t\3\k\6\m\a\o\g\k\i\m\q\e\4\k\6\g\q\2\n\e\4\j\f\s\s\q\7\v\w\k ]]
00:26:50.304   00:00:20	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:26:50.304   00:00:20	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:26:50.304  [2024-12-14 00:00:21.026524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:50.304  [2024-12-14 00:00:21.026712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134527 ]
00:26:50.562  [2024-12-14 00:00:21.194149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:50.821  [2024-12-14 00:00:21.374409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:51.079  
[2024-12-14T00:00:22.747Z] Copying: 512/512 [B] (average 500 kBps)
00:26:52.015  
00:26:52.015   00:00:22	-- dd/posix.sh@93 -- # [[ ks2ibz26f3augyngmaffv3avfd7w8xj8w8km0sgk111wm10eidk1j22mk2wh5hwfvxz0mzy5sq1frkjgra7rhfzi8sui2w00uubfx3ph3w80i4geqnibe5e92boosp3uwhr5vfsttsgxztuj77vs4er74ol1hl7ffau4uwij3420rx13b74qal4x2v0ug7az14twp41i8yb38vd6nrvtkf99m15lpjtkacnwu8a73mvghunso7awmat9dlfjp0mt1y5t75nsmkiosbq0zsfrj1buhitp6n8l4wvgb9m7z1c1wi2ym8ulawhcv9vf2dfbbsjgylozf7ptyxyww4yv3njoy6bbidq2ttenunhfwrmqrnz2m0caxx0v97dsng792aik9lbgnhc0l80g3q8p3pdiifbtd02bbzdlx2expxrnznlb53i4ihwhjsbgks1o6h0uiosw1jrekcaxr82thmkzkny6recnit3k6maogkimqe4k6gq2ne4jfssq7vwk == \k\s\2\i\b\z\2\6\f\3\a\u\g\y\n\g\m\a\f\f\v\3\a\v\f\d\7\w\8\x\j\8\w\8\k\m\0\s\g\k\1\1\1\w\m\1\0\e\i\d\k\1\j\2\2\m\k\2\w\h\5\h\w\f\v\x\z\0\m\z\y\5\s\q\1\f\r\k\j\g\r\a\7\r\h\f\z\i\8\s\u\i\2\w\0\0\u\u\b\f\x\3\p\h\3\w\8\0\i\4\g\e\q\n\i\b\e\5\e\9\2\b\o\o\s\p\3\u\w\h\r\5\v\f\s\t\t\s\g\x\z\t\u\j\7\7\v\s\4\e\r\7\4\o\l\1\h\l\7\f\f\a\u\4\u\w\i\j\3\4\2\0\r\x\1\3\b\7\4\q\a\l\4\x\2\v\0\u\g\7\a\z\1\4\t\w\p\4\1\i\8\y\b\3\8\v\d\6\n\r\v\t\k\f\9\9\m\1\5\l\p\j\t\k\a\c\n\w\u\8\a\7\3\m\v\g\h\u\n\s\o\7\a\w\m\a\t\9\d\l\f\j\p\0\m\t\1\y\5\t\7\5\n\s\m\k\i\o\s\b\q\0\z\s\f\r\j\1\b\u\h\i\t\p\6\n\8\l\4\w\v\g\b\9\m\7\z\1\c\1\w\i\2\y\m\8\u\l\a\w\h\c\v\9\v\f\2\d\f\b\b\s\j\g\y\l\o\z\f\7\p\t\y\x\y\w\w\4\y\v\3\n\j\o\y\6\b\b\i\d\q\2\t\t\e\n\u\n\h\f\w\r\m\q\r\n\z\2\m\0\c\a\x\x\0\v\9\7\d\s\n\g\7\9\2\a\i\k\9\l\b\g\n\h\c\0\l\8\0\g\3\q\8\p\3\p\d\i\i\f\b\t\d\0\2\b\b\z\d\l\x\2\e\x\p\x\r\n\z\n\l\b\5\3\i\4\i\h\w\h\j\s\b\g\k\s\1\o\6\h\0\u\i\o\s\w\1\j\r\e\k\c\a\x\r\8\2\t\h\m\k\z\k\n\y\6\r\e\c\n\i\t\3\k\6\m\a\o\g\k\i\m\q\e\4\k\6\g\q\2\n\e\4\j\f\s\s\q\7\v\w\k ]]
00:26:52.015   00:00:22	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:26:52.015   00:00:22	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:26:52.274  [2024-12-14 00:00:22.767220] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:52.274  [2024-12-14 00:00:22.767415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134545 ]
00:26:52.274  [2024-12-14 00:00:22.937038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:52.533  [2024-12-14 00:00:23.123834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:52.791  
[2024-12-14T00:00:24.459Z] Copying: 512/512 [B] (average 100 kBps)
00:26:53.727  
00:26:53.727   00:00:24	-- dd/posix.sh@93 -- # [[ ks2ibz26f3augyngmaffv3avfd7w8xj8w8km0sgk111wm10eidk1j22mk2wh5hwfvxz0mzy5sq1frkjgra7rhfzi8sui2w00uubfx3ph3w80i4geqnibe5e92boosp3uwhr5vfsttsgxztuj77vs4er74ol1hl7ffau4uwij3420rx13b74qal4x2v0ug7az14twp41i8yb38vd6nrvtkf99m15lpjtkacnwu8a73mvghunso7awmat9dlfjp0mt1y5t75nsmkiosbq0zsfrj1buhitp6n8l4wvgb9m7z1c1wi2ym8ulawhcv9vf2dfbbsjgylozf7ptyxyww4yv3njoy6bbidq2ttenunhfwrmqrnz2m0caxx0v97dsng792aik9lbgnhc0l80g3q8p3pdiifbtd02bbzdlx2expxrnznlb53i4ihwhjsbgks1o6h0uiosw1jrekcaxr82thmkzkny6recnit3k6maogkimqe4k6gq2ne4jfssq7vwk == \k\s\2\i\b\z\2\6\f\3\a\u\g\y\n\g\m\a\f\f\v\3\a\v\f\d\7\w\8\x\j\8\w\8\k\m\0\s\g\k\1\1\1\w\m\1\0\e\i\d\k\1\j\2\2\m\k\2\w\h\5\h\w\f\v\x\z\0\m\z\y\5\s\q\1\f\r\k\j\g\r\a\7\r\h\f\z\i\8\s\u\i\2\w\0\0\u\u\b\f\x\3\p\h\3\w\8\0\i\4\g\e\q\n\i\b\e\5\e\9\2\b\o\o\s\p\3\u\w\h\r\5\v\f\s\t\t\s\g\x\z\t\u\j\7\7\v\s\4\e\r\7\4\o\l\1\h\l\7\f\f\a\u\4\u\w\i\j\3\4\2\0\r\x\1\3\b\7\4\q\a\l\4\x\2\v\0\u\g\7\a\z\1\4\t\w\p\4\1\i\8\y\b\3\8\v\d\6\n\r\v\t\k\f\9\9\m\1\5\l\p\j\t\k\a\c\n\w\u\8\a\7\3\m\v\g\h\u\n\s\o\7\a\w\m\a\t\9\d\l\f\j\p\0\m\t\1\y\5\t\7\5\n\s\m\k\i\o\s\b\q\0\z\s\f\r\j\1\b\u\h\i\t\p\6\n\8\l\4\w\v\g\b\9\m\7\z\1\c\1\w\i\2\y\m\8\u\l\a\w\h\c\v\9\v\f\2\d\f\b\b\s\j\g\y\l\o\z\f\7\p\t\y\x\y\w\w\4\y\v\3\n\j\o\y\6\b\b\i\d\q\2\t\t\e\n\u\n\h\f\w\r\m\q\r\n\z\2\m\0\c\a\x\x\0\v\9\7\d\s\n\g\7\9\2\a\i\k\9\l\b\g\n\h\c\0\l\8\0\g\3\q\8\p\3\p\d\i\i\f\b\t\d\0\2\b\b\z\d\l\x\2\e\x\p\x\r\n\z\n\l\b\5\3\i\4\i\h\w\h\j\s\b\g\k\s\1\o\6\h\0\u\i\o\s\w\1\j\r\e\k\c\a\x\r\8\2\t\h\m\k\z\k\n\y\6\r\e\c\n\i\t\3\k\6\m\a\o\g\k\i\m\q\e\4\k\6\g\q\2\n\e\4\j\f\s\s\q\7\v\w\k ]]
00:26:53.727   00:00:24	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:26:53.727   00:00:24	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:26:53.986  [2024-12-14 00:00:24.513978] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:53.986  [2024-12-14 00:00:24.514196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134569 ]
00:26:53.986  [2024-12-14 00:00:24.676924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:54.244  [2024-12-14 00:00:24.852652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:54.503  
[2024-12-14T00:00:26.610Z] Copying: 512/512 [B] (average 166 kBps)
00:26:55.878  
00:26:55.878   00:00:26	-- dd/posix.sh@93 -- # [[ ks2ibz26f3augyngmaffv3avfd7w8xj8w8km0sgk111wm10eidk1j22mk2wh5hwfvxz0mzy5sq1frkjgra7rhfzi8sui2w00uubfx3ph3w80i4geqnibe5e92boosp3uwhr5vfsttsgxztuj77vs4er74ol1hl7ffau4uwij3420rx13b74qal4x2v0ug7az14twp41i8yb38vd6nrvtkf99m15lpjtkacnwu8a73mvghunso7awmat9dlfjp0mt1y5t75nsmkiosbq0zsfrj1buhitp6n8l4wvgb9m7z1c1wi2ym8ulawhcv9vf2dfbbsjgylozf7ptyxyww4yv3njoy6bbidq2ttenunhfwrmqrnz2m0caxx0v97dsng792aik9lbgnhc0l80g3q8p3pdiifbtd02bbzdlx2expxrnznlb53i4ihwhjsbgks1o6h0uiosw1jrekcaxr82thmkzkny6recnit3k6maogkimqe4k6gq2ne4jfssq7vwk == \k\s\2\i\b\z\2\6\f\3\a\u\g\y\n\g\m\a\f\f\v\3\a\v\f\d\7\w\8\x\j\8\w\8\k\m\0\s\g\k\1\1\1\w\m\1\0\e\i\d\k\1\j\2\2\m\k\2\w\h\5\h\w\f\v\x\z\0\m\z\y\5\s\q\1\f\r\k\j\g\r\a\7\r\h\f\z\i\8\s\u\i\2\w\0\0\u\u\b\f\x\3\p\h\3\w\8\0\i\4\g\e\q\n\i\b\e\5\e\9\2\b\o\o\s\p\3\u\w\h\r\5\v\f\s\t\t\s\g\x\z\t\u\j\7\7\v\s\4\e\r\7\4\o\l\1\h\l\7\f\f\a\u\4\u\w\i\j\3\4\2\0\r\x\1\3\b\7\4\q\a\l\4\x\2\v\0\u\g\7\a\z\1\4\t\w\p\4\1\i\8\y\b\3\8\v\d\6\n\r\v\t\k\f\9\9\m\1\5\l\p\j\t\k\a\c\n\w\u\8\a\7\3\m\v\g\h\u\n\s\o\7\a\w\m\a\t\9\d\l\f\j\p\0\m\t\1\y\5\t\7\5\n\s\m\k\i\o\s\b\q\0\z\s\f\r\j\1\b\u\h\i\t\p\6\n\8\l\4\w\v\g\b\9\m\7\z\1\c\1\w\i\2\y\m\8\u\l\a\w\h\c\v\9\v\f\2\d\f\b\b\s\j\g\y\l\o\z\f\7\p\t\y\x\y\w\w\4\y\v\3\n\j\o\y\6\b\b\i\d\q\2\t\t\e\n\u\n\h\f\w\r\m\q\r\n\z\2\m\0\c\a\x\x\0\v\9\7\d\s\n\g\7\9\2\a\i\k\9\l\b\g\n\h\c\0\l\8\0\g\3\q\8\p\3\p\d\i\i\f\b\t\d\0\2\b\b\z\d\l\x\2\e\x\p\x\r\n\z\n\l\b\5\3\i\4\i\h\w\h\j\s\b\g\k\s\1\o\6\h\0\u\i\o\s\w\1\j\r\e\k\c\a\x\r\8\2\t\h\m\k\z\k\n\y\6\r\e\c\n\i\t\3\k\6\m\a\o\g\k\i\m\q\e\4\k\6\g\q\2\n\e\4\j\f\s\s\q\7\v\w\k ]]
00:26:55.878  
00:26:55.878  real	0m13.514s
00:26:55.878  user	0m10.329s
00:26:55.878  sys	0m2.104s
00:26:55.878   00:00:26	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:55.878   00:00:26	-- common/autotest_common.sh@10 -- # set +x
00:26:55.878  ************************************
00:26:55.878  END TEST dd_flags_misc
00:26:55.878  ************************************
00:26:55.878   00:00:26	-- dd/posix.sh@131 -- # tests_forced_aio
00:26:55.878   00:00:26	-- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO'
00:26:55.878  * Second test run, using AIO
00:26:55.878   00:00:26	-- dd/posix.sh@113 -- # DD_APP+=("--aio")
00:26:55.878   00:00:26	-- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append
00:26:55.878   00:00:26	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:55.878   00:00:26	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:55.878   00:00:26	-- common/autotest_common.sh@10 -- # set +x
00:26:55.878  ************************************
00:26:55.878  START TEST dd_flag_append_forced_aio
00:26:55.878  ************************************
00:26:55.878   00:00:26	-- common/autotest_common.sh@1114 -- # append
00:26:55.878   00:00:26	-- dd/posix.sh@16 -- # local dump0
00:26:55.878   00:00:26	-- dd/posix.sh@17 -- # local dump1
00:26:55.878    00:00:26	-- dd/posix.sh@19 -- # gen_bytes 32
00:26:55.878    00:00:26	-- dd/common.sh@98 -- # xtrace_disable
00:26:55.878    00:00:26	-- common/autotest_common.sh@10 -- # set +x
00:26:55.878   00:00:26	-- dd/posix.sh@19 -- # dump0=avbgzph9i04qe1sjwswuvmhwyfbg8qcp
00:26:55.878    00:00:26	-- dd/posix.sh@20 -- # gen_bytes 32
00:26:55.878    00:00:26	-- dd/common.sh@98 -- # xtrace_disable
00:26:55.878    00:00:26	-- common/autotest_common.sh@10 -- # set +x
00:26:55.878   00:00:26	-- dd/posix.sh@20 -- # dump1=s72pss2kgdflag48a9i9lv5bknzrwpx7
00:26:55.878   00:00:26	-- dd/posix.sh@22 -- # printf %s avbgzph9i04qe1sjwswuvmhwyfbg8qcp
00:26:55.878   00:00:26	-- dd/posix.sh@23 -- # printf %s s72pss2kgdflag48a9i9lv5bknzrwpx7
00:26:55.878   00:00:26	-- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append
00:26:55.878  [2024-12-14 00:00:26.324334] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:55.878  [2024-12-14 00:00:26.324539] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134621 ]
00:26:55.878  [2024-12-14 00:00:26.496977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:56.138  [2024-12-14 00:00:26.700034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:56.397  
[2024-12-14T00:00:28.066Z] Copying: 32/32 [B] (average 31 kBps)
00:26:57.334  
00:26:57.334   00:00:27	-- dd/posix.sh@27 -- # [[ s72pss2kgdflag48a9i9lv5bknzrwpx7avbgzph9i04qe1sjwswuvmhwyfbg8qcp == \s\7\2\p\s\s\2\k\g\d\f\l\a\g\4\8\a\9\i\9\l\v\5\b\k\n\z\r\w\p\x\7\a\v\b\g\z\p\h\9\i\0\4\q\e\1\s\j\w\s\w\u\v\m\h\w\y\f\b\g\8\q\c\p ]]
00:26:57.334  
00:26:57.334  real	0m1.660s
00:26:57.334  user	0m1.272s
00:26:57.334  sys	0m0.257s
00:26:57.334   00:00:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:57.334   00:00:27	-- common/autotest_common.sh@10 -- # set +x
00:26:57.334  ************************************
00:26:57.334  END TEST dd_flag_append_forced_aio
00:26:57.334  ************************************
00:26:57.334   00:00:27	-- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory
00:26:57.334   00:00:27	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:57.334   00:00:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:57.334   00:00:27	-- common/autotest_common.sh@10 -- # set +x
00:26:57.334  ************************************
00:26:57.334  START TEST dd_flag_directory_forced_aio
00:26:57.334  ************************************
00:26:57.334   00:00:27	-- common/autotest_common.sh@1114 -- # directory
00:26:57.334   00:00:27	-- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:26:57.334   00:00:27	-- common/autotest_common.sh@650 -- # local es=0
00:26:57.334   00:00:27	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:26:57.334   00:00:27	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:57.334   00:00:27	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:57.334    00:00:27	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:57.335   00:00:27	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:57.335    00:00:27	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:57.335   00:00:27	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:57.335   00:00:27	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:57.335   00:00:27	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:57.335   00:00:27	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:26:57.335  [2024-12-14 00:00:28.031203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:57.335  [2024-12-14 00:00:28.031400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134662 ]
00:26:57.594  [2024-12-14 00:00:28.201523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:57.853  [2024-12-14 00:00:28.383410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:58.112  [2024-12-14 00:00:28.638138] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:26:58.112  [2024-12-14 00:00:28.638476] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:26:58.112  [2024-12-14 00:00:28.638542] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:58.680  [2024-12-14 00:00:29.223703] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:26:58.940   00:00:29	-- common/autotest_common.sh@653 -- # es=236
00:26:58.940   00:00:29	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:58.940   00:00:29	-- common/autotest_common.sh@662 -- # es=108
00:26:58.940   00:00:29	-- common/autotest_common.sh@663 -- # case "$es" in
00:26:58.940   00:00:29	-- common/autotest_common.sh@670 -- # es=1
00:26:58.940   00:00:29	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:58.940   00:00:29	-- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:26:58.940   00:00:29	-- common/autotest_common.sh@650 -- # local es=0
00:26:58.940   00:00:29	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:26:58.940   00:00:29	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:58.940   00:00:29	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:58.940    00:00:29	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:58.940   00:00:29	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:58.940    00:00:29	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:58.940   00:00:29	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:58.940   00:00:29	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:58.940   00:00:29	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:58.940   00:00:29	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:26:58.940  [2024-12-14 00:00:29.617517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:58.940  [2024-12-14 00:00:29.617716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134694 ]
00:26:59.199  [2024-12-14 00:00:29.776726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:59.457  [2024-12-14 00:00:29.933849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:59.457  [2024-12-14 00:00:30.188769] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:26:59.457  [2024-12-14 00:00:30.189111] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:26:59.457  [2024-12-14 00:00:30.189178] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:00.394  [2024-12-14 00:00:30.786951] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:27:00.394   00:00:31	-- common/autotest_common.sh@653 -- # es=236
00:27:00.394   00:00:31	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:27:00.394   00:00:31	-- common/autotest_common.sh@662 -- # es=108
00:27:00.394   00:00:31	-- common/autotest_common.sh@663 -- # case "$es" in
00:27:00.394   00:00:31	-- common/autotest_common.sh@670 -- # es=1
00:27:00.394   00:00:31	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:27:00.394  
00:27:00.394  real	0m3.162s
00:27:00.394  user	0m2.481s
00:27:00.394  sys	0m0.460s
00:27:00.394   00:00:31	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:00.394   00:00:31	-- common/autotest_common.sh@10 -- # set +x
00:27:00.394  ************************************
00:27:00.394  END TEST dd_flag_directory_forced_aio
00:27:00.394  ************************************
00:27:00.654   00:00:31	-- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow
00:27:00.654   00:00:31	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:00.654   00:00:31	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:00.654   00:00:31	-- common/autotest_common.sh@10 -- # set +x
00:27:00.654  ************************************
00:27:00.654  START TEST dd_flag_nofollow_forced_aio
00:27:00.654  ************************************
00:27:00.654   00:00:31	-- common/autotest_common.sh@1114 -- # nofollow
00:27:00.654   00:00:31	-- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:27:00.654   00:00:31	-- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:27:00.654   00:00:31	-- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:27:00.654   00:00:31	-- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:27:00.654   00:00:31	-- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:27:00.654   00:00:31	-- common/autotest_common.sh@650 -- # local es=0
00:27:00.654   00:00:31	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:27:00.654   00:00:31	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:00.654   00:00:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:00.654    00:00:31	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:00.654   00:00:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:00.654    00:00:31	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:00.654   00:00:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:00.654   00:00:31	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:00.654   00:00:31	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:27:00.654   00:00:31	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:27:00.654  [2024-12-14 00:00:31.260095] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:00.654  [2024-12-14 00:00:31.260309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134740 ]
00:27:00.914  [2024-12-14 00:00:31.429853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:00.914  [2024-12-14 00:00:31.600827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:01.173  [2024-12-14 00:00:31.853889] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:27:01.173  [2024-12-14 00:00:31.854255] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:27:01.173  [2024-12-14 00:00:31.854322] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:01.741  [2024-12-14 00:00:32.441129] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:27:02.310   00:00:32	-- common/autotest_common.sh@653 -- # es=216
00:27:02.310   00:00:32	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:27:02.310   00:00:32	-- common/autotest_common.sh@662 -- # es=88
00:27:02.310   00:00:32	-- common/autotest_common.sh@663 -- # case "$es" in
00:27:02.310   00:00:32	-- common/autotest_common.sh@670 -- # es=1
00:27:02.310   00:00:32	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:27:02.310   00:00:32	-- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:27:02.310   00:00:32	-- common/autotest_common.sh@650 -- # local es=0
00:27:02.310   00:00:32	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:27:02.310   00:00:32	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:02.310   00:00:32	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:02.310    00:00:32	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:02.310   00:00:32	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:02.310    00:00:32	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:02.310   00:00:32	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:02.310   00:00:32	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:02.310   00:00:32	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:27:02.310   00:00:32	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:27:02.310  [2024-12-14 00:00:32.839761] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:02.310  [2024-12-14 00:00:32.839963] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134761 ]
00:27:02.310  [2024-12-14 00:00:33.009705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:02.570  [2024-12-14 00:00:33.189492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:02.829  [2024-12-14 00:00:33.446906] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:27:02.829  [2024-12-14 00:00:33.447153] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:27:02.829  [2024-12-14 00:00:33.447218] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:03.397  [2024-12-14 00:00:34.025628] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:27:03.656   00:00:34	-- common/autotest_common.sh@653 -- # es=216
00:27:03.656   00:00:34	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:27:03.656   00:00:34	-- common/autotest_common.sh@662 -- # es=88
00:27:03.656   00:00:34	-- common/autotest_common.sh@663 -- # case "$es" in
00:27:03.656   00:00:34	-- common/autotest_common.sh@670 -- # es=1
00:27:03.656   00:00:34	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:27:03.656   00:00:34	-- dd/posix.sh@46 -- # gen_bytes 512
00:27:03.656   00:00:34	-- dd/common.sh@98 -- # xtrace_disable
00:27:03.656   00:00:34	-- common/autotest_common.sh@10 -- # set +x
00:27:03.656   00:00:34	-- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:27:03.915  [2024-12-14 00:00:34.435643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:03.915  [2024-12-14 00:00:34.435857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134783 ]
00:27:03.915  [2024-12-14 00:00:34.600212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:04.174  [2024-12-14 00:00:34.766051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:04.433  
[2024-12-14T00:00:36.102Z] Copying: 512/512 [B] (average 500 kBps)
00:27:05.370  
00:27:05.370   00:00:35	-- dd/posix.sh@49 -- # [[ cueewgbqup62rcki594ia72az4d4s4s6ncni9zanj3ejicilr1cfokpmqdga400qzp3ti0z1nasela86mbsuwssd6z8vy3syxzny5v37p7d0v50fhi87dtymjq7os7980rejm70o75rmlmj72hwlsxc3kmswmglglplwujfewrvdpis2mdmte922um0vouz6km0fkdv1p5brqrlcxdfdohph0k2fqb0j1m7pmphzryp1j1m2khdx26gu1nmpze492js6i2pxtzdg2c5d0nu9uyl6dbcgjmkwivmmdjhyce4egt7qbctmbhxqpew0mpq4yyjztl5ligv5yasu4dd4aglmnf72u2d7wl4znhz845i1olvh56loq3vuov2yq89sd5g4u9qqrvh0z4wrlmduy8sqotbhv5zg6mqzlbyj5nhfo90pallf1n29xlzbyshxbtri5su8gne0j47avtd1dh2cwh3bkutaf2xhr1vs3aeys9bsnwbi6k32wm52dyl2 == \c\u\e\e\w\g\b\q\u\p\6\2\r\c\k\i\5\9\4\i\a\7\2\a\z\4\d\4\s\4\s\6\n\c\n\i\9\z\a\n\j\3\e\j\i\c\i\l\r\1\c\f\o\k\p\m\q\d\g\a\4\0\0\q\z\p\3\t\i\0\z\1\n\a\s\e\l\a\8\6\m\b\s\u\w\s\s\d\6\z\8\v\y\3\s\y\x\z\n\y\5\v\3\7\p\7\d\0\v\5\0\f\h\i\8\7\d\t\y\m\j\q\7\o\s\7\9\8\0\r\e\j\m\7\0\o\7\5\r\m\l\m\j\7\2\h\w\l\s\x\c\3\k\m\s\w\m\g\l\g\l\p\l\w\u\j\f\e\w\r\v\d\p\i\s\2\m\d\m\t\e\9\2\2\u\m\0\v\o\u\z\6\k\m\0\f\k\d\v\1\p\5\b\r\q\r\l\c\x\d\f\d\o\h\p\h\0\k\2\f\q\b\0\j\1\m\7\p\m\p\h\z\r\y\p\1\j\1\m\2\k\h\d\x\2\6\g\u\1\n\m\p\z\e\4\9\2\j\s\6\i\2\p\x\t\z\d\g\2\c\5\d\0\n\u\9\u\y\l\6\d\b\c\g\j\m\k\w\i\v\m\m\d\j\h\y\c\e\4\e\g\t\7\q\b\c\t\m\b\h\x\q\p\e\w\0\m\p\q\4\y\y\j\z\t\l\5\l\i\g\v\5\y\a\s\u\4\d\d\4\a\g\l\m\n\f\7\2\u\2\d\7\w\l\4\z\n\h\z\8\4\5\i\1\o\l\v\h\5\6\l\o\q\3\v\u\o\v\2\y\q\8\9\s\d\5\g\4\u\9\q\q\r\v\h\0\z\4\w\r\l\m\d\u\y\8\s\q\o\t\b\h\v\5\z\g\6\m\q\z\l\b\y\j\5\n\h\f\o\9\0\p\a\l\l\f\1\n\2\9\x\l\z\b\y\s\h\x\b\t\r\i\5\s\u\8\g\n\e\0\j\4\7\a\v\t\d\1\d\h\2\c\w\h\3\b\k\u\t\a\f\2\x\h\r\1\v\s\3\a\e\y\s\9\b\s\n\w\b\i\6\k\3\2\w\m\5\2\d\y\l\2 ]]
00:27:05.370  
00:27:05.370  real	0m4.799s
00:27:05.370  user	0m3.785s
00:27:05.370  sys	0m0.682s
00:27:05.370   00:00:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:05.370   00:00:35	-- common/autotest_common.sh@10 -- # set +x
00:27:05.370  ************************************
00:27:05.370  END TEST dd_flag_nofollow_forced_aio
00:27:05.370  ************************************
00:27:05.370   00:00:36	-- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime
00:27:05.370   00:00:36	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:05.370   00:00:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:05.370   00:00:36	-- common/autotest_common.sh@10 -- # set +x
00:27:05.370  ************************************
00:27:05.370  START TEST dd_flag_noatime_forced_aio
00:27:05.370  ************************************
00:27:05.370   00:00:36	-- common/autotest_common.sh@1114 -- # noatime
00:27:05.370   00:00:36	-- dd/posix.sh@53 -- # local atime_if
00:27:05.370   00:00:36	-- dd/posix.sh@54 -- # local atime_of
00:27:05.370   00:00:36	-- dd/posix.sh@58 -- # gen_bytes 512
00:27:05.370   00:00:36	-- dd/common.sh@98 -- # xtrace_disable
00:27:05.370   00:00:36	-- common/autotest_common.sh@10 -- # set +x
00:27:05.370    00:00:36	-- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:27:05.370   00:00:36	-- dd/posix.sh@60 -- # atime_if=1734134435
00:27:05.370    00:00:36	-- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:27:05.370   00:00:36	-- dd/posix.sh@61 -- # atime_of=1734134435
00:27:05.370   00:00:36	-- dd/posix.sh@66 -- # sleep 1
00:27:06.748   00:00:37	-- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:27:06.748  [2024-12-14 00:00:37.137032] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:06.748  [2024-12-14 00:00:37.137244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134846 ]
00:27:06.748  [2024-12-14 00:00:37.309937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:07.007  [2024-12-14 00:00:37.520645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:07.266  
[2024-12-14T00:00:38.935Z] Copying: 512/512 [B] (average 500 kBps)
00:27:08.203  
00:27:08.203    00:00:38	-- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:27:08.203   00:00:38	-- dd/posix.sh@69 -- # (( atime_if == 1734134435 ))
00:27:08.203    00:00:38	-- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:27:08.203   00:00:38	-- dd/posix.sh@70 -- # (( atime_of == 1734134435 ))
00:27:08.203   00:00:38	-- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:27:08.203  [2024-12-14 00:00:38.900979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:08.203  [2024-12-14 00:00:38.901203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134873 ]
00:27:08.484  [2024-12-14 00:00:39.071456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:08.769  [2024-12-14 00:00:39.264868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:09.048  
[2024-12-14T00:00:40.729Z] Copying: 512/512 [B] (average 500 kBps)
00:27:09.997  
00:27:09.997    00:00:40	-- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:27:09.997   00:00:40	-- dd/posix.sh@73 -- # (( atime_if < 1734134439 ))
00:27:09.997  
00:27:09.997  real	0m4.543s
00:27:09.997  user	0m2.717s
00:27:09.997  sys	0m0.565s
00:27:09.997   00:00:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:09.997   00:00:40	-- common/autotest_common.sh@10 -- # set +x
00:27:09.997  ************************************
00:27:09.997  END TEST dd_flag_noatime_forced_aio
00:27:09.997  ************************************
00:27:09.997   00:00:40	-- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io
00:27:09.997   00:00:40	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:09.997   00:00:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:09.997   00:00:40	-- common/autotest_common.sh@10 -- # set +x
00:27:09.997  ************************************
00:27:09.997  START TEST dd_flags_misc_forced_aio
00:27:09.997  ************************************
00:27:09.997   00:00:40	-- common/autotest_common.sh@1114 -- # io
00:27:09.997   00:00:40	-- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw
00:27:09.997   00:00:40	-- dd/posix.sh@81 -- # flags_ro=(direct nonblock)
00:27:09.997   00:00:40	-- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync)
00:27:09.997   00:00:40	-- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:27:09.997   00:00:40	-- dd/posix.sh@86 -- # gen_bytes 512
00:27:09.997   00:00:40	-- dd/common.sh@98 -- # xtrace_disable
00:27:09.997   00:00:40	-- common/autotest_common.sh@10 -- # set +x
00:27:09.997   00:00:40	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:27:09.997   00:00:40	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:27:09.997  [2024-12-14 00:00:40.724292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:09.997  [2024-12-14 00:00:40.724479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134916 ]
00:27:10.256  [2024-12-14 00:00:40.891171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:10.514  [2024-12-14 00:00:41.069854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:10.773  
[2024-12-14T00:00:42.441Z] Copying: 512/512 [B] (average 500 kBps)
00:27:11.709  
00:27:11.709   00:00:42	-- dd/posix.sh@93 -- # [[ 5dsy8ryz76b4pu1hgft0crln6emvmb0700rc4346ps1513av3ggr2362fodfothx562cv2ssd1kjy3a8sasv27uwv9zsyzrdt2ef87s4lksbojz3de00n25v2kajgnep5yy9waw3u7mbempnvh7kk7h96vsdy4hr97hcnfkj1r0ow57hab6plcmkea0how8sow3ct3w21auztoo6ye4rmhn5khe6jhdw9t57oftkqvtbh4wtpa9fo9nrf8xk7jqb93wh6kjlhvshm8fdwfgfi5eym3g4eflrrpaf55okterrydssvb3fmxk16zivhggaj6w8drqi3xxjjywp4u90fyd3d4otwka7nopwfaoeds808cfmlc3jhqwuz423fnwmn38ma3sz2cbyz9v57581fqazf3egy1l4n0357f9e88araqc8bwab4dknlfvfw56bcof848ywvs26q8pws5ch7p8forw3vgkwwo7an8e8sb40v2b8touuh062mrvydfzj == \5\d\s\y\8\r\y\z\7\6\b\4\p\u\1\h\g\f\t\0\c\r\l\n\6\e\m\v\m\b\0\7\0\0\r\c\4\3\4\6\p\s\1\5\1\3\a\v\3\g\g\r\2\3\6\2\f\o\d\f\o\t\h\x\5\6\2\c\v\2\s\s\d\1\k\j\y\3\a\8\s\a\s\v\2\7\u\w\v\9\z\s\y\z\r\d\t\2\e\f\8\7\s\4\l\k\s\b\o\j\z\3\d\e\0\0\n\2\5\v\2\k\a\j\g\n\e\p\5\y\y\9\w\a\w\3\u\7\m\b\e\m\p\n\v\h\7\k\k\7\h\9\6\v\s\d\y\4\h\r\9\7\h\c\n\f\k\j\1\r\0\o\w\5\7\h\a\b\6\p\l\c\m\k\e\a\0\h\o\w\8\s\o\w\3\c\t\3\w\2\1\a\u\z\t\o\o\6\y\e\4\r\m\h\n\5\k\h\e\6\j\h\d\w\9\t\5\7\o\f\t\k\q\v\t\b\h\4\w\t\p\a\9\f\o\9\n\r\f\8\x\k\7\j\q\b\9\3\w\h\6\k\j\l\h\v\s\h\m\8\f\d\w\f\g\f\i\5\e\y\m\3\g\4\e\f\l\r\r\p\a\f\5\5\o\k\t\e\r\r\y\d\s\s\v\b\3\f\m\x\k\1\6\z\i\v\h\g\g\a\j\6\w\8\d\r\q\i\3\x\x\j\j\y\w\p\4\u\9\0\f\y\d\3\d\4\o\t\w\k\a\7\n\o\p\w\f\a\o\e\d\s\8\0\8\c\f\m\l\c\3\j\h\q\w\u\z\4\2\3\f\n\w\m\n\3\8\m\a\3\s\z\2\c\b\y\z\9\v\5\7\5\8\1\f\q\a\z\f\3\e\g\y\1\l\4\n\0\3\5\7\f\9\e\8\8\a\r\a\q\c\8\b\w\a\b\4\d\k\n\l\f\v\f\w\5\6\b\c\o\f\8\4\8\y\w\v\s\2\6\q\8\p\w\s\5\c\h\7\p\8\f\o\r\w\3\v\g\k\w\w\o\7\a\n\8\e\8\s\b\4\0\v\2\b\8\t\o\u\u\h\0\6\2\m\r\v\y\d\f\z\j ]]
00:27:11.709   00:00:42	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:27:11.709   00:00:42	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:27:11.968  [2024-12-14 00:00:42.475253] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:11.968  [2024-12-14 00:00:42.475455] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134949 ]
00:27:11.968  [2024-12-14 00:00:42.643753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:12.227  [2024-12-14 00:00:42.832414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:12.486  
[2024-12-14T00:00:44.154Z] Copying: 512/512 [B] (average 500 kBps)
00:27:13.422  
00:27:13.681   00:00:44	-- dd/posix.sh@93 -- # [[ 5dsy8ryz76b4pu1hgft0crln6emvmb0700rc4346ps1513av3ggr2362fodfothx562cv2ssd1kjy3a8sasv27uwv9zsyzrdt2ef87s4lksbojz3de00n25v2kajgnep5yy9waw3u7mbempnvh7kk7h96vsdy4hr97hcnfkj1r0ow57hab6plcmkea0how8sow3ct3w21auztoo6ye4rmhn5khe6jhdw9t57oftkqvtbh4wtpa9fo9nrf8xk7jqb93wh6kjlhvshm8fdwfgfi5eym3g4eflrrpaf55okterrydssvb3fmxk16zivhggaj6w8drqi3xxjjywp4u90fyd3d4otwka7nopwfaoeds808cfmlc3jhqwuz423fnwmn38ma3sz2cbyz9v57581fqazf3egy1l4n0357f9e88araqc8bwab4dknlfvfw56bcof848ywvs26q8pws5ch7p8forw3vgkwwo7an8e8sb40v2b8touuh062mrvydfzj == \5\d\s\y\8\r\y\z\7\6\b\4\p\u\1\h\g\f\t\0\c\r\l\n\6\e\m\v\m\b\0\7\0\0\r\c\4\3\4\6\p\s\1\5\1\3\a\v\3\g\g\r\2\3\6\2\f\o\d\f\o\t\h\x\5\6\2\c\v\2\s\s\d\1\k\j\y\3\a\8\s\a\s\v\2\7\u\w\v\9\z\s\y\z\r\d\t\2\e\f\8\7\s\4\l\k\s\b\o\j\z\3\d\e\0\0\n\2\5\v\2\k\a\j\g\n\e\p\5\y\y\9\w\a\w\3\u\7\m\b\e\m\p\n\v\h\7\k\k\7\h\9\6\v\s\d\y\4\h\r\9\7\h\c\n\f\k\j\1\r\0\o\w\5\7\h\a\b\6\p\l\c\m\k\e\a\0\h\o\w\8\s\o\w\3\c\t\3\w\2\1\a\u\z\t\o\o\6\y\e\4\r\m\h\n\5\k\h\e\6\j\h\d\w\9\t\5\7\o\f\t\k\q\v\t\b\h\4\w\t\p\a\9\f\o\9\n\r\f\8\x\k\7\j\q\b\9\3\w\h\6\k\j\l\h\v\s\h\m\8\f\d\w\f\g\f\i\5\e\y\m\3\g\4\e\f\l\r\r\p\a\f\5\5\o\k\t\e\r\r\y\d\s\s\v\b\3\f\m\x\k\1\6\z\i\v\h\g\g\a\j\6\w\8\d\r\q\i\3\x\x\j\j\y\w\p\4\u\9\0\f\y\d\3\d\4\o\t\w\k\a\7\n\o\p\w\f\a\o\e\d\s\8\0\8\c\f\m\l\c\3\j\h\q\w\u\z\4\2\3\f\n\w\m\n\3\8\m\a\3\s\z\2\c\b\y\z\9\v\5\7\5\8\1\f\q\a\z\f\3\e\g\y\1\l\4\n\0\3\5\7\f\9\e\8\8\a\r\a\q\c\8\b\w\a\b\4\d\k\n\l\f\v\f\w\5\6\b\c\o\f\8\4\8\y\w\v\s\2\6\q\8\p\w\s\5\c\h\7\p\8\f\o\r\w\3\v\g\k\w\w\o\7\a\n\8\e\8\s\b\4\0\v\2\b\8\t\o\u\u\h\0\6\2\m\r\v\y\d\f\z\j ]]
00:27:13.681   00:00:44	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:27:13.681   00:00:44	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:27:13.681  [2024-12-14 00:00:44.208919] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:13.681  [2024-12-14 00:00:44.209067] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134966 ]
00:27:13.681  [2024-12-14 00:00:44.362729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:13.940  [2024-12-14 00:00:44.544308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:14.198  
[2024-12-14T00:00:45.865Z] Copying: 512/512 [B] (average 125 kBps)
00:27:15.133  
00:27:15.392   00:00:45	-- dd/posix.sh@93 -- # [[ 5dsy8ryz76b4pu1hgft0crln6emvmb0700rc4346ps1513av3ggr2362fodfothx562cv2ssd1kjy3a8sasv27uwv9zsyzrdt2ef87s4lksbojz3de00n25v2kajgnep5yy9waw3u7mbempnvh7kk7h96vsdy4hr97hcnfkj1r0ow57hab6plcmkea0how8sow3ct3w21auztoo6ye4rmhn5khe6jhdw9t57oftkqvtbh4wtpa9fo9nrf8xk7jqb93wh6kjlhvshm8fdwfgfi5eym3g4eflrrpaf55okterrydssvb3fmxk16zivhggaj6w8drqi3xxjjywp4u90fyd3d4otwka7nopwfaoeds808cfmlc3jhqwuz423fnwmn38ma3sz2cbyz9v57581fqazf3egy1l4n0357f9e88araqc8bwab4dknlfvfw56bcof848ywvs26q8pws5ch7p8forw3vgkwwo7an8e8sb40v2b8touuh062mrvydfzj == \5\d\s\y\8\r\y\z\7\6\b\4\p\u\1\h\g\f\t\0\c\r\l\n\6\e\m\v\m\b\0\7\0\0\r\c\4\3\4\6\p\s\1\5\1\3\a\v\3\g\g\r\2\3\6\2\f\o\d\f\o\t\h\x\5\6\2\c\v\2\s\s\d\1\k\j\y\3\a\8\s\a\s\v\2\7\u\w\v\9\z\s\y\z\r\d\t\2\e\f\8\7\s\4\l\k\s\b\o\j\z\3\d\e\0\0\n\2\5\v\2\k\a\j\g\n\e\p\5\y\y\9\w\a\w\3\u\7\m\b\e\m\p\n\v\h\7\k\k\7\h\9\6\v\s\d\y\4\h\r\9\7\h\c\n\f\k\j\1\r\0\o\w\5\7\h\a\b\6\p\l\c\m\k\e\a\0\h\o\w\8\s\o\w\3\c\t\3\w\2\1\a\u\z\t\o\o\6\y\e\4\r\m\h\n\5\k\h\e\6\j\h\d\w\9\t\5\7\o\f\t\k\q\v\t\b\h\4\w\t\p\a\9\f\o\9\n\r\f\8\x\k\7\j\q\b\9\3\w\h\6\k\j\l\h\v\s\h\m\8\f\d\w\f\g\f\i\5\e\y\m\3\g\4\e\f\l\r\r\p\a\f\5\5\o\k\t\e\r\r\y\d\s\s\v\b\3\f\m\x\k\1\6\z\i\v\h\g\g\a\j\6\w\8\d\r\q\i\3\x\x\j\j\y\w\p\4\u\9\0\f\y\d\3\d\4\o\t\w\k\a\7\n\o\p\w\f\a\o\e\d\s\8\0\8\c\f\m\l\c\3\j\h\q\w\u\z\4\2\3\f\n\w\m\n\3\8\m\a\3\s\z\2\c\b\y\z\9\v\5\7\5\8\1\f\q\a\z\f\3\e\g\y\1\l\4\n\0\3\5\7\f\9\e\8\8\a\r\a\q\c\8\b\w\a\b\4\d\k\n\l\f\v\f\w\5\6\b\c\o\f\8\4\8\y\w\v\s\2\6\q\8\p\w\s\5\c\h\7\p\8\f\o\r\w\3\v\g\k\w\w\o\7\a\n\8\e\8\s\b\4\0\v\2\b\8\t\o\u\u\h\0\6\2\m\r\v\y\d\f\z\j ]]
00:27:15.392   00:00:45	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:27:15.392   00:00:45	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:27:15.392  [2024-12-14 00:00:45.938307] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:15.392  [2024-12-14 00:00:45.938959] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134994 ]
00:27:15.392  [2024-12-14 00:00:46.104314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:15.651  [2024-12-14 00:00:46.288112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:15.910  
[2024-12-14T00:00:48.019Z] Copying: 512/512 [B] (average 166 kBps)
00:27:17.287  
00:27:17.287   00:00:47	-- dd/posix.sh@93 -- # [[ 5dsy8ryz76b4pu1hgft0crln6emvmb0700rc4346ps1513av3ggr2362fodfothx562cv2ssd1kjy3a8sasv27uwv9zsyzrdt2ef87s4lksbojz3de00n25v2kajgnep5yy9waw3u7mbempnvh7kk7h96vsdy4hr97hcnfkj1r0ow57hab6plcmkea0how8sow3ct3w21auztoo6ye4rmhn5khe6jhdw9t57oftkqvtbh4wtpa9fo9nrf8xk7jqb93wh6kjlhvshm8fdwfgfi5eym3g4eflrrpaf55okterrydssvb3fmxk16zivhggaj6w8drqi3xxjjywp4u90fyd3d4otwka7nopwfaoeds808cfmlc3jhqwuz423fnwmn38ma3sz2cbyz9v57581fqazf3egy1l4n0357f9e88araqc8bwab4dknlfvfw56bcof848ywvs26q8pws5ch7p8forw3vgkwwo7an8e8sb40v2b8touuh062mrvydfzj == \5\d\s\y\8\r\y\z\7\6\b\4\p\u\1\h\g\f\t\0\c\r\l\n\6\e\m\v\m\b\0\7\0\0\r\c\4\3\4\6\p\s\1\5\1\3\a\v\3\g\g\r\2\3\6\2\f\o\d\f\o\t\h\x\5\6\2\c\v\2\s\s\d\1\k\j\y\3\a\8\s\a\s\v\2\7\u\w\v\9\z\s\y\z\r\d\t\2\e\f\8\7\s\4\l\k\s\b\o\j\z\3\d\e\0\0\n\2\5\v\2\k\a\j\g\n\e\p\5\y\y\9\w\a\w\3\u\7\m\b\e\m\p\n\v\h\7\k\k\7\h\9\6\v\s\d\y\4\h\r\9\7\h\c\n\f\k\j\1\r\0\o\w\5\7\h\a\b\6\p\l\c\m\k\e\a\0\h\o\w\8\s\o\w\3\c\t\3\w\2\1\a\u\z\t\o\o\6\y\e\4\r\m\h\n\5\k\h\e\6\j\h\d\w\9\t\5\7\o\f\t\k\q\v\t\b\h\4\w\t\p\a\9\f\o\9\n\r\f\8\x\k\7\j\q\b\9\3\w\h\6\k\j\l\h\v\s\h\m\8\f\d\w\f\g\f\i\5\e\y\m\3\g\4\e\f\l\r\r\p\a\f\5\5\o\k\t\e\r\r\y\d\s\s\v\b\3\f\m\x\k\1\6\z\i\v\h\g\g\a\j\6\w\8\d\r\q\i\3\x\x\j\j\y\w\p\4\u\9\0\f\y\d\3\d\4\o\t\w\k\a\7\n\o\p\w\f\a\o\e\d\s\8\0\8\c\f\m\l\c\3\j\h\q\w\u\z\4\2\3\f\n\w\m\n\3\8\m\a\3\s\z\2\c\b\y\z\9\v\5\7\5\8\1\f\q\a\z\f\3\e\g\y\1\l\4\n\0\3\5\7\f\9\e\8\8\a\r\a\q\c\8\b\w\a\b\4\d\k\n\l\f\v\f\w\5\6\b\c\o\f\8\4\8\y\w\v\s\2\6\q\8\p\w\s\5\c\h\7\p\8\f\o\r\w\3\v\g\k\w\w\o\7\a\n\8\e\8\s\b\4\0\v\2\b\8\t\o\u\u\h\0\6\2\m\r\v\y\d\f\z\j ]]
00:27:17.287   00:00:47	-- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:27:17.287   00:00:47	-- dd/posix.sh@86 -- # gen_bytes 512
00:27:17.287   00:00:47	-- dd/common.sh@98 -- # xtrace_disable
00:27:17.287   00:00:47	-- common/autotest_common.sh@10 -- # set +x
00:27:17.287   00:00:47	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:27:17.287   00:00:47	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:27:17.287  [2024-12-14 00:00:47.701392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:17.287  [2024-12-14 00:00:47.701628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135021 ]
00:27:17.287  [2024-12-14 00:00:47.875205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:17.546  [2024-12-14 00:00:48.085273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:17.805  
[2024-12-14T00:00:49.474Z] Copying: 512/512 [B] (average 500 kBps)
00:27:18.742  
00:27:18.742   00:00:49	-- dd/posix.sh@93 -- # [[ rlu5q8dnf6k1sa4ht02z4aupanitxkj2gnzljwv5nelwfevt6mi5ek9roz8v98qac5fb516w8nun2r47ym8v0kw2nfh7581ufvxo7jxym9xd9zbwyh1i2ulfwpr30z5lk35h9xr4shxns5sj4i273pnykpkoa2rvyg7t3bdglgdu3erif9eni712wtv9ev3sar8t30ypoqtxfn52xgecgi323co5e10mszhvtvkt4bowoww1izpcscow7dfshnfrrz9ad1lbovss6bwbf0jvsqa6932api9b2u565aac620tennheevxc057a1t1vqcn4k7pm8yg5xna3dw1pfizrwfm2a4voafmz3o2fkjpc2atgejxlhutfojxahzo8i2zvx429i17ekt3b82pnp0zp0yxy54y8lqr9j6rawwx048zs5qgz1m7wbbvr75dp2aazs4a543wy0enkw92wrc5x7fxloec625vsbx8bpgluzjdxu7nzmc196uvwek5bbip == \r\l\u\5\q\8\d\n\f\6\k\1\s\a\4\h\t\0\2\z\4\a\u\p\a\n\i\t\x\k\j\2\g\n\z\l\j\w\v\5\n\e\l\w\f\e\v\t\6\m\i\5\e\k\9\r\o\z\8\v\9\8\q\a\c\5\f\b\5\1\6\w\8\n\u\n\2\r\4\7\y\m\8\v\0\k\w\2\n\f\h\7\5\8\1\u\f\v\x\o\7\j\x\y\m\9\x\d\9\z\b\w\y\h\1\i\2\u\l\f\w\p\r\3\0\z\5\l\k\3\5\h\9\x\r\4\s\h\x\n\s\5\s\j\4\i\2\7\3\p\n\y\k\p\k\o\a\2\r\v\y\g\7\t\3\b\d\g\l\g\d\u\3\e\r\i\f\9\e\n\i\7\1\2\w\t\v\9\e\v\3\s\a\r\8\t\3\0\y\p\o\q\t\x\f\n\5\2\x\g\e\c\g\i\3\2\3\c\o\5\e\1\0\m\s\z\h\v\t\v\k\t\4\b\o\w\o\w\w\1\i\z\p\c\s\c\o\w\7\d\f\s\h\n\f\r\r\z\9\a\d\1\l\b\o\v\s\s\6\b\w\b\f\0\j\v\s\q\a\6\9\3\2\a\p\i\9\b\2\u\5\6\5\a\a\c\6\2\0\t\e\n\n\h\e\e\v\x\c\0\5\7\a\1\t\1\v\q\c\n\4\k\7\p\m\8\y\g\5\x\n\a\3\d\w\1\p\f\i\z\r\w\f\m\2\a\4\v\o\a\f\m\z\3\o\2\f\k\j\p\c\2\a\t\g\e\j\x\l\h\u\t\f\o\j\x\a\h\z\o\8\i\2\z\v\x\4\2\9\i\1\7\e\k\t\3\b\8\2\p\n\p\0\z\p\0\y\x\y\5\4\y\8\l\q\r\9\j\6\r\a\w\w\x\0\4\8\z\s\5\q\g\z\1\m\7\w\b\b\v\r\7\5\d\p\2\a\a\z\s\4\a\5\4\3\w\y\0\e\n\k\w\9\2\w\r\c\5\x\7\f\x\l\o\e\c\6\2\5\v\s\b\x\8\b\p\g\l\u\z\j\d\x\u\7\n\z\m\c\1\9\6\u\v\w\e\k\5\b\b\i\p ]]
00:27:18.742   00:00:49	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:27:18.742   00:00:49	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:27:18.742  [2024-12-14 00:00:49.365402] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:18.742  [2024-12-14 00:00:49.366293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135038 ]
00:27:19.001  [2024-12-14 00:00:49.541834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:19.001  [2024-12-14 00:00:49.705702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:19.260  
[2024-12-14T00:00:50.929Z] Copying: 512/512 [B] (average 500 kBps)
00:27:20.197  
00:27:20.197   00:00:50	-- dd/posix.sh@93 -- # [[ rlu5q8dnf6k1sa4ht02z4aupanitxkj2gnzljwv5nelwfevt6mi5ek9roz8v98qac5fb516w8nun2r47ym8v0kw2nfh7581ufvxo7jxym9xd9zbwyh1i2ulfwpr30z5lk35h9xr4shxns5sj4i273pnykpkoa2rvyg7t3bdglgdu3erif9eni712wtv9ev3sar8t30ypoqtxfn52xgecgi323co5e10mszhvtvkt4bowoww1izpcscow7dfshnfrrz9ad1lbovss6bwbf0jvsqa6932api9b2u565aac620tennheevxc057a1t1vqcn4k7pm8yg5xna3dw1pfizrwfm2a4voafmz3o2fkjpc2atgejxlhutfojxahzo8i2zvx429i17ekt3b82pnp0zp0yxy54y8lqr9j6rawwx048zs5qgz1m7wbbvr75dp2aazs4a543wy0enkw92wrc5x7fxloec625vsbx8bpgluzjdxu7nzmc196uvwek5bbip == \r\l\u\5\q\8\d\n\f\6\k\1\s\a\4\h\t\0\2\z\4\a\u\p\a\n\i\t\x\k\j\2\g\n\z\l\j\w\v\5\n\e\l\w\f\e\v\t\6\m\i\5\e\k\9\r\o\z\8\v\9\8\q\a\c\5\f\b\5\1\6\w\8\n\u\n\2\r\4\7\y\m\8\v\0\k\w\2\n\f\h\7\5\8\1\u\f\v\x\o\7\j\x\y\m\9\x\d\9\z\b\w\y\h\1\i\2\u\l\f\w\p\r\3\0\z\5\l\k\3\5\h\9\x\r\4\s\h\x\n\s\5\s\j\4\i\2\7\3\p\n\y\k\p\k\o\a\2\r\v\y\g\7\t\3\b\d\g\l\g\d\u\3\e\r\i\f\9\e\n\i\7\1\2\w\t\v\9\e\v\3\s\a\r\8\t\3\0\y\p\o\q\t\x\f\n\5\2\x\g\e\c\g\i\3\2\3\c\o\5\e\1\0\m\s\z\h\v\t\v\k\t\4\b\o\w\o\w\w\1\i\z\p\c\s\c\o\w\7\d\f\s\h\n\f\r\r\z\9\a\d\1\l\b\o\v\s\s\6\b\w\b\f\0\j\v\s\q\a\6\9\3\2\a\p\i\9\b\2\u\5\6\5\a\a\c\6\2\0\t\e\n\n\h\e\e\v\x\c\0\5\7\a\1\t\1\v\q\c\n\4\k\7\p\m\8\y\g\5\x\n\a\3\d\w\1\p\f\i\z\r\w\f\m\2\a\4\v\o\a\f\m\z\3\o\2\f\k\j\p\c\2\a\t\g\e\j\x\l\h\u\t\f\o\j\x\a\h\z\o\8\i\2\z\v\x\4\2\9\i\1\7\e\k\t\3\b\8\2\p\n\p\0\z\p\0\y\x\y\5\4\y\8\l\q\r\9\j\6\r\a\w\w\x\0\4\8\z\s\5\q\g\z\1\m\7\w\b\b\v\r\7\5\d\p\2\a\a\z\s\4\a\5\4\3\w\y\0\e\n\k\w\9\2\w\r\c\5\x\7\f\x\l\o\e\c\6\2\5\v\s\b\x\8\b\p\g\l\u\z\j\d\x\u\7\n\z\m\c\1\9\6\u\v\w\e\k\5\b\b\i\p ]]
00:27:20.197   00:00:50	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:27:20.197   00:00:50	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:27:20.456  [2024-12-14 00:00:50.981326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:20.456  [2024-12-14 00:00:50.981561] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135066 ]
00:27:20.456  [2024-12-14 00:00:51.153763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:20.715  [2024-12-14 00:00:51.321841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:20.974  
[2024-12-14T00:00:52.644Z] Copying: 512/512 [B] (average 125 kBps)
00:27:21.912  
00:27:21.912   00:00:52	-- dd/posix.sh@93 -- # [[ rlu5q8dnf6k1sa4ht02z4aupanitxkj2gnzljwv5nelwfevt6mi5ek9roz8v98qac5fb516w8nun2r47ym8v0kw2nfh7581ufvxo7jxym9xd9zbwyh1i2ulfwpr30z5lk35h9xr4shxns5sj4i273pnykpkoa2rvyg7t3bdglgdu3erif9eni712wtv9ev3sar8t30ypoqtxfn52xgecgi323co5e10mszhvtvkt4bowoww1izpcscow7dfshnfrrz9ad1lbovss6bwbf0jvsqa6932api9b2u565aac620tennheevxc057a1t1vqcn4k7pm8yg5xna3dw1pfizrwfm2a4voafmz3o2fkjpc2atgejxlhutfojxahzo8i2zvx429i17ekt3b82pnp0zp0yxy54y8lqr9j6rawwx048zs5qgz1m7wbbvr75dp2aazs4a543wy0enkw92wrc5x7fxloec625vsbx8bpgluzjdxu7nzmc196uvwek5bbip == \r\l\u\5\q\8\d\n\f\6\k\1\s\a\4\h\t\0\2\z\4\a\u\p\a\n\i\t\x\k\j\2\g\n\z\l\j\w\v\5\n\e\l\w\f\e\v\t\6\m\i\5\e\k\9\r\o\z\8\v\9\8\q\a\c\5\f\b\5\1\6\w\8\n\u\n\2\r\4\7\y\m\8\v\0\k\w\2\n\f\h\7\5\8\1\u\f\v\x\o\7\j\x\y\m\9\x\d\9\z\b\w\y\h\1\i\2\u\l\f\w\p\r\3\0\z\5\l\k\3\5\h\9\x\r\4\s\h\x\n\s\5\s\j\4\i\2\7\3\p\n\y\k\p\k\o\a\2\r\v\y\g\7\t\3\b\d\g\l\g\d\u\3\e\r\i\f\9\e\n\i\7\1\2\w\t\v\9\e\v\3\s\a\r\8\t\3\0\y\p\o\q\t\x\f\n\5\2\x\g\e\c\g\i\3\2\3\c\o\5\e\1\0\m\s\z\h\v\t\v\k\t\4\b\o\w\o\w\w\1\i\z\p\c\s\c\o\w\7\d\f\s\h\n\f\r\r\z\9\a\d\1\l\b\o\v\s\s\6\b\w\b\f\0\j\v\s\q\a\6\9\3\2\a\p\i\9\b\2\u\5\6\5\a\a\c\6\2\0\t\e\n\n\h\e\e\v\x\c\0\5\7\a\1\t\1\v\q\c\n\4\k\7\p\m\8\y\g\5\x\n\a\3\d\w\1\p\f\i\z\r\w\f\m\2\a\4\v\o\a\f\m\z\3\o\2\f\k\j\p\c\2\a\t\g\e\j\x\l\h\u\t\f\o\j\x\a\h\z\o\8\i\2\z\v\x\4\2\9\i\1\7\e\k\t\3\b\8\2\p\n\p\0\z\p\0\y\x\y\5\4\y\8\l\q\r\9\j\6\r\a\w\w\x\0\4\8\z\s\5\q\g\z\1\m\7\w\b\b\v\r\7\5\d\p\2\a\a\z\s\4\a\5\4\3\w\y\0\e\n\k\w\9\2\w\r\c\5\x\7\f\x\l\o\e\c\6\2\5\v\s\b\x\8\b\p\g\l\u\z\j\d\x\u\7\n\z\m\c\1\9\6\u\v\w\e\k\5\b\b\i\p ]]
00:27:21.912   00:00:52	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:27:21.912   00:00:52	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:27:21.912  [2024-12-14 00:00:52.604356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:21.912  [2024-12-14 00:00:52.605145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135091 ]
00:27:22.171  [2024-12-14 00:00:52.773456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:22.430  [2024-12-14 00:00:52.997737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:22.689  
[2024-12-14T00:00:54.357Z] Copying: 512/512 [B] (average 166 kBps)
00:27:23.625  
00:27:23.625   00:00:54	-- dd/posix.sh@93 -- # [[ rlu5q8dnf6k1sa4ht02z4aupanitxkj2gnzljwv5nelwfevt6mi5ek9roz8v98qac5fb516w8nun2r47ym8v0kw2nfh7581ufvxo7jxym9xd9zbwyh1i2ulfwpr30z5lk35h9xr4shxns5sj4i273pnykpkoa2rvyg7t3bdglgdu3erif9eni712wtv9ev3sar8t30ypoqtxfn52xgecgi323co5e10mszhvtvkt4bowoww1izpcscow7dfshnfrrz9ad1lbovss6bwbf0jvsqa6932api9b2u565aac620tennheevxc057a1t1vqcn4k7pm8yg5xna3dw1pfizrwfm2a4voafmz3o2fkjpc2atgejxlhutfojxahzo8i2zvx429i17ekt3b82pnp0zp0yxy54y8lqr9j6rawwx048zs5qgz1m7wbbvr75dp2aazs4a543wy0enkw92wrc5x7fxloec625vsbx8bpgluzjdxu7nzmc196uvwek5bbip == \r\l\u\5\q\8\d\n\f\6\k\1\s\a\4\h\t\0\2\z\4\a\u\p\a\n\i\t\x\k\j\2\g\n\z\l\j\w\v\5\n\e\l\w\f\e\v\t\6\m\i\5\e\k\9\r\o\z\8\v\9\8\q\a\c\5\f\b\5\1\6\w\8\n\u\n\2\r\4\7\y\m\8\v\0\k\w\2\n\f\h\7\5\8\1\u\f\v\x\o\7\j\x\y\m\9\x\d\9\z\b\w\y\h\1\i\2\u\l\f\w\p\r\3\0\z\5\l\k\3\5\h\9\x\r\4\s\h\x\n\s\5\s\j\4\i\2\7\3\p\n\y\k\p\k\o\a\2\r\v\y\g\7\t\3\b\d\g\l\g\d\u\3\e\r\i\f\9\e\n\i\7\1\2\w\t\v\9\e\v\3\s\a\r\8\t\3\0\y\p\o\q\t\x\f\n\5\2\x\g\e\c\g\i\3\2\3\c\o\5\e\1\0\m\s\z\h\v\t\v\k\t\4\b\o\w\o\w\w\1\i\z\p\c\s\c\o\w\7\d\f\s\h\n\f\r\r\z\9\a\d\1\l\b\o\v\s\s\6\b\w\b\f\0\j\v\s\q\a\6\9\3\2\a\p\i\9\b\2\u\5\6\5\a\a\c\6\2\0\t\e\n\n\h\e\e\v\x\c\0\5\7\a\1\t\1\v\q\c\n\4\k\7\p\m\8\y\g\5\x\n\a\3\d\w\1\p\f\i\z\r\w\f\m\2\a\4\v\o\a\f\m\z\3\o\2\f\k\j\p\c\2\a\t\g\e\j\x\l\h\u\t\f\o\j\x\a\h\z\o\8\i\2\z\v\x\4\2\9\i\1\7\e\k\t\3\b\8\2\p\n\p\0\z\p\0\y\x\y\5\4\y\8\l\q\r\9\j\6\r\a\w\w\x\0\4\8\z\s\5\q\g\z\1\m\7\w\b\b\v\r\7\5\d\p\2\a\a\z\s\4\a\5\4\3\w\y\0\e\n\k\w\9\2\w\r\c\5\x\7\f\x\l\o\e\c\6\2\5\v\s\b\x\8\b\p\g\l\u\z\j\d\x\u\7\n\z\m\c\1\9\6\u\v\w\e\k\5\b\b\i\p ]]
00:27:23.625  ************************************
00:27:23.625  END TEST dd_flags_misc_forced_aio
00:27:23.625  ************************************
00:27:23.625  
00:27:23.625  real	0m13.559s
00:27:23.625  user	0m10.421s
00:27:23.625  sys	0m2.054s
00:27:23.625   00:00:54	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:23.625   00:00:54	-- common/autotest_common.sh@10 -- # set +x
00:27:23.625   00:00:54	-- dd/posix.sh@1 -- # cleanup
00:27:23.625   00:00:54	-- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:27:23.625   00:00:54	-- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:27:23.625  ************************************
00:27:23.625  END TEST spdk_dd_posix
00:27:23.625  ************************************
00:27:23.625  
00:27:23.625  real	0m56.070s
00:27:23.625  user	0m41.648s
00:27:23.625  sys	0m8.327s
00:27:23.625   00:00:54	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:23.625   00:00:54	-- common/autotest_common.sh@10 -- # set +x
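
The spdk_dd_posix suite above closed with dd_flags_misc_forced_aio, a flag-matrix sweep: for each read-flag/write-flag pairing, spdk_dd copies 512 freshly generated bytes from dd.dump0 to dd.dump1 in AIO mode, and posix.sh@93 asserts the two files match byte for byte (hence the long escaped-pattern [[ ]] lines, with --oflag stepping through direct, nonblock, sync and dsync). A minimal sketch of that loop, reconstructed from the xtrace; the array contents and the gen_bytes helper behavior are assumptions:

    # Sketch of the dd/posix.sh flag matrix (reconstruction, not verbatim source)
    flags_ro=(direct nonblock)                     # assumed contents
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        gen_bytes 512                              # fresh 512 random bytes in dd.dump0
        for flag_rw in "${flags_rw[@]}"; do
            spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" \
                    --of=dd.dump1 --oflag="$flag_rw"
            [[ $(< dd.dump0) == "$(< dd.dump1)" ]] # the posix.sh@93 comparison
        done
    done
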
00:27:23.625   00:00:54	-- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh
00:27:23.625   00:00:54	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:23.625   00:00:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:23.625   00:00:54	-- common/autotest_common.sh@10 -- # set +x
00:27:23.625  ************************************
00:27:23.625  START TEST spdk_dd_malloc
00:27:23.625  ************************************
00:27:23.625   00:00:54	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh
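
Every test script here is launched through run_test, which accounts for the START/END banners and the real/user/sys triplets bracketing each test in this log. Its shape, sketched from the autotest_common.sh trace (the '[' 2 -le 1 ']' line is its arity check; xtrace bookkeeping is elided and the exact body is an assumption):

    run_test() {                       # sketch, not the verbatim helper
        if [ "$#" -le 1 ]; then        # needs a name plus a command to run
            echo "usage: run_test <name> <cmd> [args...]" >&2
            return 1
        fi
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"                      # emits the real/user/sys lines seen above
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }
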
00:27:23.884  * Looking for test storage...
00:27:23.884  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:27:23.884     00:00:54	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:27:23.884      00:00:54	-- common/autotest_common.sh@1690 -- # lcov --version
00:27:23.884      00:00:54	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:27:23.884     00:00:54	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:27:23.884     00:00:54	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:27:23.884     00:00:54	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:27:23.884     00:00:54	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:27:23.884     00:00:54	-- scripts/common.sh@335 -- # IFS=.-:
00:27:23.884     00:00:54	-- scripts/common.sh@335 -- # read -ra ver1
00:27:23.884     00:00:54	-- scripts/common.sh@336 -- # IFS=.-:
00:27:23.884     00:00:54	-- scripts/common.sh@336 -- # read -ra ver2
00:27:23.884     00:00:54	-- scripts/common.sh@337 -- # local 'op=<'
00:27:23.884     00:00:54	-- scripts/common.sh@339 -- # ver1_l=2
00:27:23.884     00:00:54	-- scripts/common.sh@340 -- # ver2_l=1
00:27:23.884     00:00:54	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:27:23.884     00:00:54	-- scripts/common.sh@343 -- # case "$op" in
00:27:23.884     00:00:54	-- scripts/common.sh@344 -- # : 1
00:27:23.884     00:00:54	-- scripts/common.sh@363 -- # (( v = 0 ))
00:27:23.884     00:00:54	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:23.884      00:00:54	-- scripts/common.sh@364 -- # decimal 1
00:27:23.884      00:00:54	-- scripts/common.sh@352 -- # local d=1
00:27:23.884      00:00:54	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:23.884      00:00:54	-- scripts/common.sh@354 -- # echo 1
00:27:23.884     00:00:54	-- scripts/common.sh@364 -- # ver1[v]=1
00:27:23.884      00:00:54	-- scripts/common.sh@365 -- # decimal 2
00:27:23.884      00:00:54	-- scripts/common.sh@352 -- # local d=2
00:27:23.884      00:00:54	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:23.884      00:00:54	-- scripts/common.sh@354 -- # echo 2
00:27:23.884     00:00:54	-- scripts/common.sh@365 -- # ver2[v]=2
00:27:23.884     00:00:54	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:27:23.884     00:00:54	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:27:23.884     00:00:54	-- scripts/common.sh@367 -- # return 0
00:27:23.884     00:00:54	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:23.884     00:00:54	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:27:23.884  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:23.884  		--rc genhtml_branch_coverage=1
00:27:23.884  		--rc genhtml_function_coverage=1
00:27:23.884  		--rc genhtml_legend=1
00:27:23.884  		--rc geninfo_all_blocks=1
00:27:23.884  		--rc geninfo_unexecuted_blocks=1
00:27:23.884  		
00:27:23.884  		'
00:27:23.884     00:00:54	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:27:23.884  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:23.884  		--rc genhtml_branch_coverage=1
00:27:23.884  		--rc genhtml_function_coverage=1
00:27:23.884  		--rc genhtml_legend=1
00:27:23.884  		--rc geninfo_all_blocks=1
00:27:23.884  		--rc geninfo_unexecuted_blocks=1
00:27:23.884  		
00:27:23.884  		'
00:27:23.884     00:00:54	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:27:23.884  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:23.884  		--rc genhtml_branch_coverage=1
00:27:23.884  		--rc genhtml_function_coverage=1
00:27:23.884  		--rc genhtml_legend=1
00:27:23.884  		--rc geninfo_all_blocks=1
00:27:23.884  		--rc geninfo_unexecuted_blocks=1
00:27:23.884  		
00:27:23.884  		'
00:27:23.884     00:00:54	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:27:23.884  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:23.884  		--rc genhtml_branch_coverage=1
00:27:23.884  		--rc genhtml_function_coverage=1
00:27:23.884  		--rc genhtml_legend=1
00:27:23.884  		--rc geninfo_all_blocks=1
00:27:23.884  		--rc geninfo_unexecuted_blocks=1
00:27:23.884  		
00:27:23.884  		'
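
The lt 1.15 2 trace above is the harness probing whether the installed lcov is older than 2.x before exporting the extended --rc coverage options. cmp_versions splits both version strings on '.', '-' and ':' and compares field by field; a sketch consistent with the traced statements (the decimal sanitizer and the gt/eq branches are elided, so details are assumptions):

    cmp_versions() {                            # sketch of the scripts/common.sh logic
        local ver1 ver2 ver1_l ver2_l v
        IFS=.-: read -ra ver1 <<< "$1"          # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"          # "2"    -> (2)
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                                # equal, so "<" is false
    }
    lt() { cmp_versions "$1" '<' "$2"; }        # lt 1.15 2 -> 0 (true), as traced
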
00:27:23.884    00:00:54	-- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:27:23.884     00:00:54	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:23.884     00:00:54	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:23.884     00:00:54	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:23.884      00:00:54	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:27:23.884      00:00:54	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:27:23.884      00:00:54	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:27:23.884      00:00:54	-- paths/export.sh@5 -- # export PATH
00:27:23.884      00:00:54	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
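
Note that paths/export.sh prepends the same three toolchain directories every time it is sourced, with no dedup guard, which is why the echoed PATH above repeats the go/protoc/golangci segments many times (harmless, since the first match wins). The script presumably amounts to:

    # Presumed content of paths/export.sh (an assumption from the @2..@6 trace)
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH
    echo "$PATH"
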
00:27:23.884   00:00:54	-- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy
00:27:23.884   00:00:54	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:23.884   00:00:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:23.884   00:00:54	-- common/autotest_common.sh@10 -- # set +x
00:27:23.884  ************************************
00:27:23.884  START TEST dd_malloc_copy
00:27:23.884  ************************************
00:27:23.884   00:00:54	-- common/autotest_common.sh@1114 -- # malloc_copy
00:27:23.884   00:00:54	-- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512
00:27:23.884   00:00:54	-- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512
00:27:23.884   00:00:54	-- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512')
00:27:23.884   00:00:54	-- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0
00:27:23.884   00:00:54	-- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512')
00:27:23.884   00:00:54	-- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1
00:27:23.884   00:00:54	-- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62
00:27:23.884    00:00:54	-- dd/malloc.sh@28 -- # gen_conf
00:27:23.884    00:00:54	-- dd/common.sh@31 -- # xtrace_disable
00:27:23.884    00:00:54	-- common/autotest_common.sh@10 -- # set +x
00:27:23.884  {
00:27:23.884    "subsystems": [
00:27:23.884      {
00:27:23.884        "subsystem": "bdev",
00:27:23.884        "config": [
00:27:23.884          {
00:27:23.884            "params": {
00:27:23.884              "block_size": 512,
00:27:23.884              "num_blocks": 1048576,
00:27:23.884              "name": "malloc0"
00:27:23.884            },
00:27:23.884            "method": "bdev_malloc_create"
00:27:23.884          },
00:27:23.884          {
00:27:23.884            "params": {
00:27:23.884              "block_size": 512,
00:27:23.884              "num_blocks": 1048576,
00:27:23.884              "name": "malloc1"
00:27:23.884            },
00:27:23.884            "method": "bdev_malloc_create"
00:27:23.884          },
00:27:23.884          {
00:27:23.884            "method": "bdev_wait_for_examine"
00:27:23.884          }
00:27:23.884        ]
00:27:23.884      }
00:27:23.884    ]
00:27:23.884  }
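
The JSON above is emitted by gen_conf and consumed via --json /dev/fd/62, i.e. over an anonymous descriptor from process substitution, so no config file is written to disk: it declares two 512 MiB malloc bdevs (1048576 blocks x 512 B each) plus bdev_wait_for_examine, and spdk_dd then copies malloc0 to malloc1 entirely in memory. The invocation pattern, sketched (gen_conf internals assumed from the method_bdev_malloc_create_* arrays declared above):

    # <(gen_conf) expands to a /dev/fd/NN path like the one in the trace
    spdk_dd --ib=malloc0 --ob=malloc1 --json <(gen_conf)
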
00:27:23.884  [2024-12-14 00:00:54.584038] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:23.884  [2024-12-14 00:00:54.584471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135197 ]
00:27:24.143  [2024-12-14 00:00:54.755340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:24.402  [2024-12-14 00:00:54.914887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:26.937  
[2024-12-14T00:00:58.265Z] Copying: 222/512 [MB] (222 MBps)
[2024-12-14T00:00:58.524Z] Copying: 445/512 [MB] (222 MBps)
[2024-12-14T00:01:01.813Z] Copying: 512/512 [MB] (average 222 MBps)
00:27:31.081  
00:27:31.081   00:01:01	-- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62
00:27:31.081    00:01:01	-- dd/malloc.sh@33 -- # gen_conf
00:27:31.081    00:01:01	-- dd/common.sh@31 -- # xtrace_disable
00:27:31.081    00:01:01	-- common/autotest_common.sh@10 -- # set +x
00:27:31.081  {
00:27:31.081    "subsystems": [
00:27:31.081      {
00:27:31.081        "subsystem": "bdev",
00:27:31.081        "config": [
00:27:31.081          {
00:27:31.081            "params": {
00:27:31.081              "block_size": 512,
00:27:31.081              "num_blocks": 1048576,
00:27:31.081              "name": "malloc0"
00:27:31.081            },
00:27:31.081            "method": "bdev_malloc_create"
00:27:31.081          },
00:27:31.081          {
00:27:31.081            "params": {
00:27:31.081              "block_size": 512,
00:27:31.081              "num_blocks": 1048576,
00:27:31.081              "name": "malloc1"
00:27:31.081            },
00:27:31.081            "method": "bdev_malloc_create"
00:27:31.081          },
00:27:31.081          {
00:27:31.081            "method": "bdev_wait_for_examine"
00:27:31.081          }
00:27:31.081        ]
00:27:31.081      }
00:27:31.081    ]
00:27:31.081  }
00:27:31.081  [2024-12-14 00:01:01.403031] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:31.081  [2024-12-14 00:01:01.403390] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135281 ]
00:27:31.081  [2024-12-14 00:01:01.569744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:31.081  [2024-12-14 00:01:01.745600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:33.617  
[2024-12-14T00:01:04.917Z] Copying: 224/512 [MB] (224 MBps)
[2024-12-14T00:01:05.179Z] Copying: 449/512 [MB] (224 MBps)
[2024-12-14T00:01:08.470Z] Copying: 512/512 [MB] (average 224 MBps)
00:27:37.738  
00:27:37.738  ************************************
00:27:37.738  END TEST dd_malloc_copy
00:27:37.738  ************************************
00:27:37.738  
00:27:37.738  real	0m13.633s
00:27:37.738  user	0m12.338s
00:27:37.738  sys	0m1.145s
00:27:37.738   00:01:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:37.738   00:01:08	-- common/autotest_common.sh@10 -- # set +x
00:27:37.738  ************************************
00:27:37.738  END TEST spdk_dd_malloc
00:27:37.738  ************************************
00:27:37.738  
00:27:37.738  real	0m13.871s
00:27:37.738  user	0m12.511s
00:27:37.738  sys	0m1.222s
00:27:37.738   00:01:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:37.738   00:01:08	-- common/autotest_common.sh@10 -- # set +x
00:27:37.738   00:01:08	-- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0
00:27:37.738   00:01:08	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:27:37.738   00:01:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:37.738   00:01:08	-- common/autotest_common.sh@10 -- # set +x
00:27:37.738  ************************************
00:27:37.738  START TEST spdk_dd_bdev_to_bdev
00:27:37.738  ************************************
00:27:37.738   00:01:08	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0
00:27:37.738  * Looking for test storage...
00:27:37.738  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:27:37.738     00:01:08	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:27:37.738      00:01:08	-- common/autotest_common.sh@1690 -- # lcov --version
00:27:37.738      00:01:08	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:27:37.738     00:01:08	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:27:37.738     00:01:08	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:27:37.738     00:01:08	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:27:37.738     00:01:08	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:27:37.738     00:01:08	-- scripts/common.sh@335 -- # IFS=.-:
00:27:37.738     00:01:08	-- scripts/common.sh@335 -- # read -ra ver1
00:27:37.738     00:01:08	-- scripts/common.sh@336 -- # IFS=.-:
00:27:37.738     00:01:08	-- scripts/common.sh@336 -- # read -ra ver2
00:27:37.738     00:01:08	-- scripts/common.sh@337 -- # local 'op=<'
00:27:37.738     00:01:08	-- scripts/common.sh@339 -- # ver1_l=2
00:27:37.738     00:01:08	-- scripts/common.sh@340 -- # ver2_l=1
00:27:37.738     00:01:08	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:27:37.738     00:01:08	-- scripts/common.sh@343 -- # case "$op" in
00:27:37.738     00:01:08	-- scripts/common.sh@344 -- # : 1
00:27:37.738     00:01:08	-- scripts/common.sh@363 -- # (( v = 0 ))
00:27:37.738     00:01:08	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:37.738      00:01:08	-- scripts/common.sh@364 -- # decimal 1
00:27:37.738      00:01:08	-- scripts/common.sh@352 -- # local d=1
00:27:37.738      00:01:08	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:37.738      00:01:08	-- scripts/common.sh@354 -- # echo 1
00:27:37.738     00:01:08	-- scripts/common.sh@364 -- # ver1[v]=1
00:27:37.738      00:01:08	-- scripts/common.sh@365 -- # decimal 2
00:27:37.738      00:01:08	-- scripts/common.sh@352 -- # local d=2
00:27:37.738      00:01:08	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:37.738      00:01:08	-- scripts/common.sh@354 -- # echo 2
00:27:37.738     00:01:08	-- scripts/common.sh@365 -- # ver2[v]=2
00:27:37.738     00:01:08	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:27:37.738     00:01:08	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:27:37.738     00:01:08	-- scripts/common.sh@367 -- # return 0
00:27:37.738     00:01:08	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:37.738     00:01:08	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:27:37.738  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:37.738  		--rc genhtml_branch_coverage=1
00:27:37.738  		--rc genhtml_function_coverage=1
00:27:37.738  		--rc genhtml_legend=1
00:27:37.738  		--rc geninfo_all_blocks=1
00:27:37.738  		--rc geninfo_unexecuted_blocks=1
00:27:37.738  		
00:27:37.738  		'
00:27:37.738     00:01:08	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:27:37.738  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:37.738  		--rc genhtml_branch_coverage=1
00:27:37.738  		--rc genhtml_function_coverage=1
00:27:37.738  		--rc genhtml_legend=1
00:27:37.738  		--rc geninfo_all_blocks=1
00:27:37.738  		--rc geninfo_unexecuted_blocks=1
00:27:37.738  		
00:27:37.738  		'
00:27:37.738     00:01:08	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:27:37.738  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:37.738  		--rc genhtml_branch_coverage=1
00:27:37.738  		--rc genhtml_function_coverage=1
00:27:37.738  		--rc genhtml_legend=1
00:27:37.738  		--rc geninfo_all_blocks=1
00:27:37.738  		--rc geninfo_unexecuted_blocks=1
00:27:37.738  		
00:27:37.738  		'
00:27:37.738     00:01:08	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:27:37.738  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:37.738  		--rc genhtml_branch_coverage=1
00:27:37.738  		--rc genhtml_function_coverage=1
00:27:37.738  		--rc genhtml_legend=1
00:27:37.738  		--rc geninfo_all_blocks=1
00:27:37.738  		--rc geninfo_unexecuted_blocks=1
00:27:37.738  		
00:27:37.738  		'
00:27:37.738    00:01:08	-- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:27:37.738     00:01:08	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:37.738     00:01:08	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:37.738     00:01:08	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:37.738      00:01:08	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:27:37.738      00:01:08	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:27:37.738      00:01:08	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:27:37.738      00:01:08	-- paths/export.sh@5 -- # export PATH
00:27:37.738      00:01:08	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:27:37.738   00:01:08	-- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@")
00:27:37.738   00:01:08	-- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT
00:27:37.738   00:01:08	-- dd/bdev_to_bdev.sh@49 -- # bs=1048576
00:27:37.738   00:01:08	-- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 ))
00:27:37.739   00:01:08	-- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0
00:27:37.739   00:01:08	-- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1
00:27:37.739   00:01:08	-- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0
00:27:37.739   00:01:08	-- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1
00:27:37.739   00:01:08	-- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1
00:27:37.739   00:01:08	-- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie')
00:27:37.739   00:01:08	-- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1
00:27:37.739   00:01:08	-- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096')
00:27:37.739   00:01:08	-- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0
00:27:37.739   00:01:08	-- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256
00:27:37.997  [2024-12-14 00:01:08.482031] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:37.997  [2024-12-14 00:01:08.482484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135439 ]
00:27:37.997  [2024-12-14 00:01:08.654779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:38.256  [2024-12-14 00:01:08.913422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:38.823  
[2024-12-14T00:01:10.491Z] Copying: 256/256 [MB] (average 1254 MBps)
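
That first transfer is fixture setup: spdk_dd streams 256 blocks of 1 MiB from /dev/zero into test/dd/aio1, giving the 4096-byte-block aio1 bdev a 256 MiB backing file before any Nvme0n1/aio1 copies run. A plain coreutils equivalent, purely for orientation (not what the harness executes):

    dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 bs=1048576 count=256
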
00:27:39.759  
00:27:39.759   00:01:10	-- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:27:39.759   00:01:10	-- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:27:39.759   00:01:10	-- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it'
00:27:39.759   00:01:10	-- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it'
00:27:39.759   00:01:10	-- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64
00:27:39.759   00:01:10	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:27:39.759   00:01:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:39.759   00:01:10	-- common/autotest_common.sh@10 -- # set +x
00:27:39.759  ************************************
00:27:39.759  START TEST dd_inflate_file
00:27:39.759  ************************************
00:27:39.759   00:01:10	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64
00:27:40.018  [2024-12-14 00:01:10.524505] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:40.018  [2024-12-14 00:01:10.524942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135467 ]
00:27:40.018  [2024-12-14 00:01:10.690881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:40.276  [2024-12-14 00:01:10.866256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:40.535  
[2024-12-14T00:01:12.644Z] Copying: 64/64 [MB] (average 1280 MBps)
00:27:41.912  
00:27:41.912  ************************************
00:27:41.912  END TEST dd_inflate_file
00:27:41.912  ************************************
00:27:41.912  
00:27:41.912  real	0m1.778s
00:27:41.912  user	0m1.351s
00:27:41.912  sys	0m0.292s
00:27:41.912   00:01:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:41.912   00:01:12	-- common/autotest_common.sh@10 -- # set +x
00:27:41.912    00:01:12	-- dd/bdev_to_bdev.sh@104 -- # wc -c
00:27:41.912   00:01:12	-- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891
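
The size recorded here decodes neatly: dd_inflate_file appended 64 x 1 MiB (67,108,864 bytes) to dd.dump0, which already held the 26-character magic line 'This Is Our Magic, find it' plus its newline, so wc -c reports 67,108,864 + 27 = 67,108,891. As an arithmetic check (not from the log):

    echo $(( 64 * 1048576 + 26 + 1 ))    # -> 67108891
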
00:27:41.912   00:01:12	-- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62
00:27:41.912    00:01:12	-- dd/bdev_to_bdev.sh@107 -- # gen_conf
00:27:41.912    00:01:12	-- dd/common.sh@31 -- # xtrace_disable
00:27:41.912   00:01:12	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:27:41.912    00:01:12	-- common/autotest_common.sh@10 -- # set +x
00:27:41.912   00:01:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:41.912   00:01:12	-- common/autotest_common.sh@10 -- # set +x
00:27:41.912  ************************************
00:27:41.912  START TEST dd_copy_to_out_bdev
00:27:41.912  ************************************
00:27:41.912   00:01:12	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62
00:27:41.912  {
00:27:41.912    "subsystems": [
00:27:41.912      {
00:27:41.912        "subsystem": "bdev",
00:27:41.912        "config": [
00:27:41.912          {
00:27:41.912            "params": {
00:27:41.912              "block_size": 4096,
00:27:41.912              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:27:41.912              "name": "aio1"
00:27:41.912            },
00:27:41.912            "method": "bdev_aio_create"
00:27:41.912          },
00:27:41.912          {
00:27:41.912            "params": {
00:27:41.912              "trtype": "pcie",
00:27:41.912              "traddr": "0000:00:06.0",
00:27:41.912              "name": "Nvme0"
00:27:41.912            },
00:27:41.912            "method": "bdev_nvme_attach_controller"
00:27:41.912          },
00:27:41.912          {
00:27:41.912            "method": "bdev_wait_for_examine"
00:27:41.912          }
00:27:41.912        ]
00:27:41.912      }
00:27:41.912    ]
00:27:41.912  }
00:27:41.912  [2024-12-14 00:01:12.366192] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:41.912  [2024-12-14 00:01:12.366567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135526 ]
00:27:41.912  [2024-12-14 00:01:12.534637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:42.170  [2024-12-14 00:01:12.722694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:43.545  
[2024-12-14T00:01:14.535Z] Copying: 45/64 [MB] (45 MBps)
[2024-12-14T00:01:15.911Z] Copying: 64/64 [MB] (average 45 MBps)
00:27:45.179  
00:27:45.179  ************************************
00:27:45.179  END TEST dd_copy_to_out_bdev
00:27:45.179  ************************************
00:27:45.179  
00:27:45.179  real	0m3.260s
00:27:45.179  user	0m2.789s
00:27:45.179  sys	0m0.365s
00:27:45.179   00:01:15	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:45.179   00:01:15	-- common/autotest_common.sh@10 -- # set +x
00:27:45.179   00:01:15	-- dd/bdev_to_bdev.sh@113 -- # count=65
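
count=65 follows from that file size: at bs=1 MiB, 67,108,891 bytes need ceil(67108891 / 1048576) = 65 blocks, so the copy onto Nvme0n1 rounds up and carries the 27 trailing bytes along. The usual shell ceiling idiom (illustrative; the harness may derive it differently):

    echo $(( (67108891 + 1048576 - 1) / 1048576 ))    # -> 65
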
00:27:45.179   00:01:15	-- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic
00:27:45.179   00:01:15	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:45.179   00:01:15	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:45.179   00:01:15	-- common/autotest_common.sh@10 -- # set +x
00:27:45.179  ************************************
00:27:45.179  START TEST dd_offset_magic
00:27:45.179  ************************************
00:27:45.179   00:01:15	-- common/autotest_common.sh@1114 -- # offset_magic
00:27:45.179   00:01:15	-- dd/bdev_to_bdev.sh@13 -- # local magic_check
00:27:45.179   00:01:15	-- dd/bdev_to_bdev.sh@14 -- # local offsets offset
00:27:45.179   00:01:15	-- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64)
00:27:45.179   00:01:15	-- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}"
00:27:45.179   00:01:15	-- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62
00:27:45.179    00:01:15	-- dd/bdev_to_bdev.sh@20 -- # gen_conf
00:27:45.179    00:01:15	-- dd/common.sh@31 -- # xtrace_disable
00:27:45.179    00:01:15	-- common/autotest_common.sh@10 -- # set +x
00:27:45.179  {
00:27:45.179    "subsystems": [
00:27:45.179      {
00:27:45.179        "subsystem": "bdev",
00:27:45.179        "config": [
00:27:45.179          {
00:27:45.179            "params": {
00:27:45.179              "block_size": 4096,
00:27:45.179              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:27:45.179              "name": "aio1"
00:27:45.179            },
00:27:45.179            "method": "bdev_aio_create"
00:27:45.179          },
00:27:45.179          {
00:27:45.179            "params": {
00:27:45.179              "trtype": "pcie",
00:27:45.179              "traddr": "0000:00:06.0",
00:27:45.179              "name": "Nvme0"
00:27:45.179            },
00:27:45.179            "method": "bdev_nvme_attach_controller"
00:27:45.179          },
00:27:45.179          {
00:27:45.179            "method": "bdev_wait_for_examine"
00:27:45.179          }
00:27:45.179        ]
00:27:45.179      }
00:27:45.179    ]
00:27:45.179  }
00:27:45.179  [2024-12-14 00:01:15.688205] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:45.179  [2024-12-14 00:01:15.688583] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135587 ]
00:27:45.179  [2024-12-14 00:01:15.855178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:45.438  [2024-12-14 00:01:16.033052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:46.374  
[2024-12-14T00:01:18.484Z] Copying: 65/65 [MB] (average 107 MBps)
00:27:47.752  
00:27:47.752   00:01:18	-- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62
00:27:47.752    00:01:18	-- dd/bdev_to_bdev.sh@28 -- # gen_conf
00:27:47.752    00:01:18	-- dd/common.sh@31 -- # xtrace_disable
00:27:47.752    00:01:18	-- common/autotest_common.sh@10 -- # set +x
00:27:47.752  {
00:27:47.752    "subsystems": [
00:27:47.752      {
00:27:47.752        "subsystem": "bdev",
00:27:47.752        "config": [
00:27:47.752          {
00:27:47.752            "params": {
00:27:47.752              "block_size": 4096,
00:27:47.752              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:27:47.752              "name": "aio1"
00:27:47.752            },
00:27:47.752            "method": "bdev_aio_create"
00:27:47.752          },
00:27:47.752          {
00:27:47.752            "params": {
00:27:47.752              "trtype": "pcie",
00:27:47.752              "traddr": "0000:00:06.0",
00:27:47.752              "name": "Nvme0"
00:27:47.752            },
00:27:47.752            "method": "bdev_nvme_attach_controller"
00:27:47.752          },
00:27:47.752          {
00:27:47.752            "method": "bdev_wait_for_examine"
00:27:47.752          }
00:27:47.752        ]
00:27:47.752      }
00:27:47.752    ]
00:27:47.752  }
00:27:47.752  [2024-12-14 00:01:18.321809] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:47.752  [2024-12-14 00:01:18.322191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135636 ]
00:27:48.011  [2024-12-14 00:01:18.487087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:48.011  [2024-12-14 00:01:18.661062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:48.579  
[2024-12-14T00:01:20.249Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:27:49.517  
00:27:49.517   00:01:20	-- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check
00:27:49.517   00:01:20	-- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]]
00:27:49.517   00:01:20	-- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}"
00:27:49.517   00:01:20	-- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62
00:27:49.517    00:01:20	-- dd/bdev_to_bdev.sh@20 -- # gen_conf
00:27:49.517    00:01:20	-- dd/common.sh@31 -- # xtrace_disable
00:27:49.517    00:01:20	-- common/autotest_common.sh@10 -- # set +x
00:27:49.777  {
00:27:49.777    "subsystems": [
00:27:49.777      {
00:27:49.777        "subsystem": "bdev",
00:27:49.777        "config": [
00:27:49.777          {
00:27:49.777            "params": {
00:27:49.777              "block_size": 4096,
00:27:49.777              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:27:49.777              "name": "aio1"
00:27:49.777            },
00:27:49.777            "method": "bdev_aio_create"
00:27:49.777          },
00:27:49.777          {
00:27:49.777            "params": {
00:27:49.777              "trtype": "pcie",
00:27:49.777              "traddr": "0000:00:06.0",
00:27:49.777              "name": "Nvme0"
00:27:49.777            },
00:27:49.777            "method": "bdev_nvme_attach_controller"
00:27:49.777          },
00:27:49.777          {
00:27:49.777            "method": "bdev_wait_for_examine"
00:27:49.777          }
00:27:49.777        ]
00:27:49.777      }
00:27:49.777    ]
00:27:49.777  }
00:27:49.777  [2024-12-14 00:01:20.268123] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:49.777  [2024-12-14 00:01:20.268502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135666 ]
00:27:49.777  [2024-12-14 00:01:20.435676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:50.036  [2024-12-14 00:01:20.621108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:50.974  
[2024-12-14T00:01:23.085Z] Copying: 65/65 [MB] (average 146 MBps)
00:27:52.353  
00:27:52.353   00:01:22	-- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62
00:27:52.353    00:01:22	-- dd/bdev_to_bdev.sh@28 -- # gen_conf
00:27:52.353    00:01:22	-- dd/common.sh@31 -- # xtrace_disable
00:27:52.353    00:01:22	-- common/autotest_common.sh@10 -- # set +x
00:27:52.353  {
00:27:52.353    "subsystems": [
00:27:52.353      {
00:27:52.353        "subsystem": "bdev",
00:27:52.353        "config": [
00:27:52.353          {
00:27:52.353            "params": {
00:27:52.353              "block_size": 4096,
00:27:52.353              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:27:52.353              "name": "aio1"
00:27:52.353            },
00:27:52.353            "method": "bdev_aio_create"
00:27:52.353          },
00:27:52.353          {
00:27:52.353            "params": {
00:27:52.353              "trtype": "pcie",
00:27:52.353              "traddr": "0000:00:06.0",
00:27:52.353              "name": "Nvme0"
00:27:52.353            },
00:27:52.353            "method": "bdev_nvme_attach_controller"
00:27:52.353          },
00:27:52.353          {
00:27:52.353            "method": "bdev_wait_for_examine"
00:27:52.353          }
00:27:52.353        ]
00:27:52.353      }
00:27:52.353    ]
00:27:52.353  }
00:27:52.353  [2024-12-14 00:01:22.748144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:52.353  [2024-12-14 00:01:22.748960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135700 ]
00:27:52.353  [2024-12-14 00:01:22.915892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:52.613  [2024-12-14 00:01:23.101755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:52.872  
[2024-12-14T00:01:24.622Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:27:53.890  
00:27:53.890   00:01:24	-- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check
00:27:53.890   00:01:24	-- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]]
00:27:53.890  ************************************
00:27:53.890  END TEST dd_offset_magic
00:27:53.890  ************************************
00:27:53.890  
00:27:53.890  real	0m8.930s
00:27:53.890  user	0m6.360s
00:27:53.890  sys	0m1.285s
00:27:53.890   00:01:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:53.890   00:01:24	-- common/autotest_common.sh@10 -- # set +x
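
dd_offset_magic, in summary: the magic line sits at byte 0 of Nvme0n1 (it heads dd.dump0, which dd_copy_to_out_bdev wrote out), each iteration copies 65 MiB into aio1 at --seek=<offset>, reads one 1 MiB block back with --skip=<offset>, and passes only if the first 26 bytes recovered are the magic string, proving seek/skip addressing end to end for offsets 16 and 64. A sketch of the loop, assuming the structure the xtrace shows:

    offsets=(16 64)                                # from bdev_to_bdev.sh@16
    for offset in "${offsets[@]}"; do
        spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek="$offset" \
                --bs=1048576 --json <(gen_conf)
        spdk_dd --ib=aio1 --of=dd.dump1 --count=1 --skip="$offset" \
                --bs=1048576 --json <(gen_conf)
        read -rn26 magic_check < dd.dump1          # first 26 bytes at that offset
        [[ $magic == "$magic_check" ]]             # the @36 comparison
    done
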
00:27:53.890   00:01:24	-- dd/bdev_to_bdev.sh@1 -- # cleanup
00:27:53.890   00:01:24	-- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330
00:27:53.890   00:01:24	-- dd/common.sh@10 -- # local bdev=Nvme0n1
00:27:53.890   00:01:24	-- dd/common.sh@11 -- # local nvme_ref=
00:27:53.890   00:01:24	-- dd/common.sh@12 -- # local size=4194330
00:27:53.890   00:01:24	-- dd/common.sh@14 -- # local bs=1048576
00:27:53.890   00:01:24	-- dd/common.sh@15 -- # local count=5
00:27:53.890   00:01:24	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62
00:27:53.890    00:01:24	-- dd/common.sh@18 -- # gen_conf
00:27:53.890    00:01:24	-- dd/common.sh@31 -- # xtrace_disable
00:27:53.890    00:01:24	-- common/autotest_common.sh@10 -- # set +x
00:27:54.149  {
00:27:54.149    "subsystems": [
00:27:54.149      {
00:27:54.149        "subsystem": "bdev",
00:27:54.149        "config": [
00:27:54.149          {
00:27:54.149            "params": {
00:27:54.149              "block_size": 4096,
00:27:54.149              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:27:54.149              "name": "aio1"
00:27:54.149            },
00:27:54.149            "method": "bdev_aio_create"
00:27:54.149          },
00:27:54.149          {
00:27:54.149            "params": {
00:27:54.149              "trtype": "pcie",
00:27:54.149              "traddr": "0000:00:06.0",
00:27:54.149              "name": "Nvme0"
00:27:54.149            },
00:27:54.149            "method": "bdev_nvme_attach_controller"
00:27:54.149          },
00:27:54.149          {
00:27:54.149            "method": "bdev_wait_for_examine"
00:27:54.149          }
00:27:54.149        ]
00:27:54.149      }
00:27:54.149    ]
00:27:54.149  }
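
cleanup's clear_nvme zeroes the first size bytes of each bdev by streaming /dev/zero through spdk_dd; size=4194330 is 4 x 1048576 + 26 bytes, and count=5 is again the ceiling at bs=1 MiB, rounding up so the 26-byte tail is wiped too:

    echo $(( (4194330 + 1048576 - 1) / 1048576 ))    # ceil -> 5
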
00:27:54.149  [2024-12-14 00:01:24.657825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:54.149  [2024-12-14 00:01:24.658173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135749 ]
00:27:54.149  [2024-12-14 00:01:24.826546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:54.409  [2024-12-14 00:01:25.013645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:54.976  
[2024-12-14T00:01:26.647Z] Copying: 5120/5120 [kB] (average 1250 MBps)
00:27:55.915  
00:27:55.915   00:01:26	-- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330
00:27:55.915   00:01:26	-- dd/common.sh@10 -- # local bdev=aio1
00:27:55.915   00:01:26	-- dd/common.sh@11 -- # local nvme_ref=
00:27:55.915   00:01:26	-- dd/common.sh@12 -- # local size=4194330
00:27:55.915   00:01:26	-- dd/common.sh@14 -- # local bs=1048576
00:27:55.915   00:01:26	-- dd/common.sh@15 -- # local count=5
00:27:55.915   00:01:26	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62
00:27:55.915    00:01:26	-- dd/common.sh@18 -- # gen_conf
00:27:55.915    00:01:26	-- dd/common.sh@31 -- # xtrace_disable
00:27:55.915    00:01:26	-- common/autotest_common.sh@10 -- # set +x
00:27:55.915  {
00:27:55.915    "subsystems": [
00:27:55.915      {
00:27:55.915        "subsystem": "bdev",
00:27:55.915        "config": [
00:27:55.915          {
00:27:55.915            "params": {
00:27:55.915              "block_size": 4096,
00:27:55.915              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:27:55.915              "name": "aio1"
00:27:55.915            },
00:27:55.915            "method": "bdev_aio_create"
00:27:55.915          },
00:27:55.916          {
00:27:55.916            "params": {
00:27:55.916              "trtype": "pcie",
00:27:55.916              "traddr": "0000:00:06.0",
00:27:55.916              "name": "Nvme0"
00:27:55.916            },
00:27:55.916            "method": "bdev_nvme_attach_controller"
00:27:55.916          },
00:27:55.916          {
00:27:55.916            "method": "bdev_wait_for_examine"
00:27:55.916          }
00:27:55.916        ]
00:27:55.916      }
00:27:55.916    ]
00:27:55.916  }
00:27:55.915  [2024-12-14 00:01:26.521022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:55.916  [2024-12-14 00:01:26.521932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135778 ]
00:27:56.177  [2024-12-14 00:01:26.690755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:56.177  [2024-12-14 00:01:26.884884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:56.745  
[2024-12-14T00:01:28.415Z] Copying: 5120/5120 [kB] (average 172 MBps)
00:27:57.683  
00:27:57.941   00:01:28	-- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1
00:27:57.941  ************************************
00:27:57.941  END TEST spdk_dd_bdev_to_bdev
00:27:57.941  ************************************
00:27:57.941  
00:27:57.941  real	0m20.234s
00:27:57.941  user	0m15.153s
00:27:57.941  sys	0m3.118s
00:27:57.941   00:01:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:57.941   00:01:28	-- common/autotest_common.sh@10 -- # set +x
00:27:57.941   00:01:28	-- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 ))
00:27:57.941   00:01:28	-- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh
00:27:57.941   00:01:28	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:57.941   00:01:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:57.941   00:01:28	-- common/autotest_common.sh@10 -- # set +x
00:27:57.941  ************************************
00:27:57.941  START TEST spdk_dd_sparse
00:27:57.941  ************************************
00:27:57.941   00:01:28	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh
00:27:57.941  * Looking for test storage...
00:27:57.941  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:27:57.941     00:01:28	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:27:57.941      00:01:28	-- common/autotest_common.sh@1690 -- # lcov --version
00:27:57.941      00:01:28	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:27:58.201     00:01:28	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:27:58.201     00:01:28	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:27:58.201     00:01:28	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:27:58.201     00:01:28	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:27:58.201     00:01:28	-- scripts/common.sh@335 -- # IFS=.-:
00:27:58.201     00:01:28	-- scripts/common.sh@335 -- # read -ra ver1
00:27:58.201     00:01:28	-- scripts/common.sh@336 -- # IFS=.-:
00:27:58.201     00:01:28	-- scripts/common.sh@336 -- # read -ra ver2
00:27:58.201     00:01:28	-- scripts/common.sh@337 -- # local 'op=<'
00:27:58.201     00:01:28	-- scripts/common.sh@339 -- # ver1_l=2
00:27:58.201     00:01:28	-- scripts/common.sh@340 -- # ver2_l=1
00:27:58.201     00:01:28	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:27:58.201     00:01:28	-- scripts/common.sh@343 -- # case "$op" in
00:27:58.201     00:01:28	-- scripts/common.sh@344 -- # : 1
00:27:58.201     00:01:28	-- scripts/common.sh@363 -- # (( v = 0 ))
00:27:58.201     00:01:28	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:58.201      00:01:28	-- scripts/common.sh@364 -- # decimal 1
00:27:58.201      00:01:28	-- scripts/common.sh@352 -- # local d=1
00:27:58.201      00:01:28	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:58.201      00:01:28	-- scripts/common.sh@354 -- # echo 1
00:27:58.201     00:01:28	-- scripts/common.sh@364 -- # ver1[v]=1
00:27:58.201      00:01:28	-- scripts/common.sh@365 -- # decimal 2
00:27:58.201      00:01:28	-- scripts/common.sh@352 -- # local d=2
00:27:58.201      00:01:28	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:58.201      00:01:28	-- scripts/common.sh@354 -- # echo 2
00:27:58.201     00:01:28	-- scripts/common.sh@365 -- # ver2[v]=2
00:27:58.201     00:01:28	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:27:58.201     00:01:28	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:27:58.201     00:01:28	-- scripts/common.sh@367 -- # return 0
00:27:58.201     00:01:28	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:58.201     00:01:28	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:27:58.201  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:58.201  		--rc genhtml_branch_coverage=1
00:27:58.201  		--rc genhtml_function_coverage=1
00:27:58.201  		--rc genhtml_legend=1
00:27:58.201  		--rc geninfo_all_blocks=1
00:27:58.201  		--rc geninfo_unexecuted_blocks=1
00:27:58.201  		
00:27:58.201  		'
00:27:58.201     00:01:28	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:27:58.201  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:58.201  		--rc genhtml_branch_coverage=1
00:27:58.201  		--rc genhtml_function_coverage=1
00:27:58.201  		--rc genhtml_legend=1
00:27:58.201  		--rc geninfo_all_blocks=1
00:27:58.201  		--rc geninfo_unexecuted_blocks=1
00:27:58.201  		
00:27:58.201  		'
00:27:58.201     00:01:28	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:27:58.201  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:58.201  		--rc genhtml_branch_coverage=1
00:27:58.201  		--rc genhtml_function_coverage=1
00:27:58.201  		--rc genhtml_legend=1
00:27:58.201  		--rc geninfo_all_blocks=1
00:27:58.201  		--rc geninfo_unexecuted_blocks=1
00:27:58.201  		
00:27:58.201  		'
00:27:58.201     00:01:28	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:27:58.201  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:58.201  		--rc genhtml_branch_coverage=1
00:27:58.201  		--rc genhtml_function_coverage=1
00:27:58.201  		--rc genhtml_legend=1
00:27:58.201  		--rc geninfo_all_blocks=1
00:27:58.201  		--rc geninfo_unexecuted_blocks=1
00:27:58.201  		
00:27:58.201  		'
00:27:58.201    00:01:28	-- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:27:58.201     00:01:28	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:58.201     00:01:28	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:58.201     00:01:28	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:58.201      00:01:28	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:27:58.201      00:01:28	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:27:58.201      00:01:28	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:27:58.201      00:01:28	-- paths/export.sh@5 -- # export PATH
00:27:58.201      00:01:28	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:27:58.201   00:01:28	-- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk
00:27:58.201   00:01:28	-- dd/sparse.sh@109 -- # aio_bdev=dd_aio
00:27:58.201   00:01:28	-- dd/sparse.sh@110 -- # file1=file_zero1
00:27:58.201   00:01:28	-- dd/sparse.sh@111 -- # file2=file_zero2
00:27:58.201   00:01:28	-- dd/sparse.sh@112 -- # file3=file_zero3
00:27:58.201   00:01:28	-- dd/sparse.sh@113 -- # lvstore=dd_lvstore
00:27:58.201   00:01:28	-- dd/sparse.sh@114 -- # lvol=dd_lvol
00:27:58.201   00:01:28	-- dd/sparse.sh@116 -- # trap cleanup EXIT
00:27:58.202   00:01:28	-- dd/sparse.sh@118 -- # prepare
00:27:58.202   00:01:28	-- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600
00:27:58.202   00:01:28	-- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1
00:27:58.202  1+0 records in
00:27:58.202  1+0 records out
00:27:58.202  4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00899963 s, 466 MB/s
00:27:58.202   00:01:28	-- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
00:27:58.202  1+0 records in
00:27:58.202  1+0 records out
00:27:58.202  4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0104168 s, 403 MB/s
00:27:58.202   00:01:28	-- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8
00:27:58.202  1+0 records in
00:27:58.202  1+0 records out
00:27:58.202  4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00832957 s, 504 MB/s
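The three writes above land at offsets 0, 16 MiB and 32 MiB (seek counts are in 4 MiB output blocks), so file_zero1 ends up 36 MiB long with only 12 MiB of data and two 12 MiB holes. A rough standalone sketch of the same prepare step (names mirror the log; this is a reconstruction, not the sparse.sh source):

    truncate --size 104857600 dd_sparse_aio_disk        # 100 MiB backing file for the AIO bdev
    for seek in 0 4 8; do                               # 4 MiB extents at 0, 16 MiB, 32 MiB
        dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=$seek
    done
    stat --printf='%s bytes apparent, %b blocks allocated\n' file_zero1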
00:27:58.202   00:01:28	-- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file
00:27:58.202   00:01:28	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:58.202   00:01:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:58.202   00:01:28	-- common/autotest_common.sh@10 -- # set +x
00:27:58.202  ************************************
00:27:58.202  START TEST dd_sparse_file_to_file
00:27:58.202  ************************************
00:27:58.202   00:01:28	-- common/autotest_common.sh@1114 -- # file_to_file
00:27:58.202   00:01:28	-- dd/sparse.sh@26 -- # local stat1_s stat1_b
00:27:58.202   00:01:28	-- dd/sparse.sh@27 -- # local stat2_s stat2_b
00:27:58.202   00:01:28	-- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:27:58.202   00:01:28	-- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0
00:27:58.202   00:01:28	-- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore')
00:27:58.202   00:01:28	-- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1
00:27:58.202   00:01:28	-- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62
00:27:58.202    00:01:28	-- dd/sparse.sh@41 -- # gen_conf
00:27:58.202    00:01:28	-- dd/common.sh@31 -- # xtrace_disable
00:27:58.202    00:01:28	-- common/autotest_common.sh@10 -- # set +x
00:27:58.202  {
00:27:58.202    "subsystems": [
00:27:58.202      {
00:27:58.202        "subsystem": "bdev",
00:27:58.202        "config": [
00:27:58.202          {
00:27:58.202            "params": {
00:27:58.202              "block_size": 4096,
00:27:58.202              "filename": "dd_sparse_aio_disk",
00:27:58.202              "name": "dd_aio"
00:27:58.202            },
00:27:58.202            "method": "bdev_aio_create"
00:27:58.202          },
00:27:58.202          {
00:27:58.202            "params": {
00:27:58.202              "lvs_name": "dd_lvstore",
00:27:58.202              "bdev_name": "dd_aio"
00:27:58.202            },
00:27:58.202            "method": "bdev_lvol_create_lvstore"
00:27:58.202          },
00:27:58.202          {
00:27:58.202            "method": "bdev_wait_for_examine"
00:27:58.202          }
00:27:58.202        ]
00:27:58.202      }
00:27:58.202    ]
00:27:58.202  }
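gen_conf simply prints the JSON above on an anonymous pipe, which spdk_dd reads through --json /dev/fd/62. The same effect written out by hand would look roughly like this (the config file name is illustrative, not part of the test):

    # feed the bdev config above to spdk_dd via process substitution
    ./build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse \
        --json <(cat bdev_aio_lvstore.json)

Per the --sparse help text shown later in this log, spdk_dd then skips holes in the input instead of copying zeroes.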
00:27:58.202  [2024-12-14 00:01:28.843375] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:58.202  [2024-12-14 00:01:28.844362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135874 ]
00:27:58.461  [2024-12-14 00:01:29.011149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:58.720  [2024-12-14 00:01:29.199558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:58.979  
[2024-12-14T00:01:31.089Z] Copying: 12/36 [MB] (average 1000 MBps)
00:28:00.357  
00:28:00.357    00:01:30	-- dd/sparse.sh@47 -- # stat --printf=%s file_zero1
00:28:00.357   00:01:30	-- dd/sparse.sh@47 -- # stat1_s=37748736
00:28:00.357    00:01:30	-- dd/sparse.sh@48 -- # stat --printf=%s file_zero2
00:28:00.357   00:01:30	-- dd/sparse.sh@48 -- # stat2_s=37748736
00:28:00.357   00:01:30	-- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]]
00:28:00.357    00:01:30	-- dd/sparse.sh@52 -- # stat --printf=%b file_zero1
00:28:00.357   00:01:30	-- dd/sparse.sh@52 -- # stat1_b=24576
00:28:00.357    00:01:30	-- dd/sparse.sh@53 -- # stat --printf=%b file_zero2
00:28:00.357   00:01:30	-- dd/sparse.sh@53 -- # stat2_b=24576
00:28:00.357   00:01:30	-- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]]
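The pass criteria: %s is the apparent size and %b the count of allocated 512-byte blocks, and both must match between source and copy. The block check is what proves sparseness survived: 24576 * 512 = 12582912 bytes, so only 12 MiB of the 36 MiB apparent size is allocated, matching the "Copying: 12/36 [MB]" progress line above. Spelled out:

    test "$(stat --printf=%s file_zero1)" = "$(stat --printf=%s file_zero2)"   # 37748736 both
    test "$(stat --printf=%b file_zero1)" = "$(stat --printf=%b file_zero2)"   # 24576 both
    echo $(( 24576 * 512 ))                                                    # 12582912 = 12 MiB allocated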
00:28:00.357  
00:28:00.357  ************************************
00:28:00.357  END TEST dd_sparse_file_to_file
00:28:00.357  ************************************
00:28:00.357  real	0m1.971s
00:28:00.357  user	0m1.501s
00:28:00.357  sys	0m0.320s
00:28:00.357   00:01:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:00.357   00:01:30	-- common/autotest_common.sh@10 -- # set +x
00:28:00.357   00:01:30	-- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev
00:28:00.357   00:01:30	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:00.357   00:01:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:00.357   00:01:30	-- common/autotest_common.sh@10 -- # set +x
00:28:00.357  ************************************
00:28:00.357  START TEST dd_sparse_file_to_bdev
00:28:00.357  ************************************
00:28:00.357   00:01:30	-- common/autotest_common.sh@1114 -- # file_to_bdev
00:28:00.357   00:01:30	-- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:28:00.357   00:01:30	-- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0
00:28:00.357   00:01:30	-- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true')
00:28:00.357   00:01:30	-- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1
00:28:00.357   00:01:30	-- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62
00:28:00.357    00:01:30	-- dd/sparse.sh@73 -- # gen_conf
00:28:00.357    00:01:30	-- dd/common.sh@31 -- # xtrace_disable
00:28:00.357    00:01:30	-- common/autotest_common.sh@10 -- # set +x
00:28:00.357  {
00:28:00.357    "subsystems": [
00:28:00.357      {
00:28:00.357        "subsystem": "bdev",
00:28:00.357        "config": [
00:28:00.357          {
00:28:00.357            "params": {
00:28:00.357              "block_size": 4096,
00:28:00.357              "filename": "dd_sparse_aio_disk",
00:28:00.357              "name": "dd_aio"
00:28:00.357            },
00:28:00.357            "method": "bdev_aio_create"
00:28:00.357          },
00:28:00.357          {
00:28:00.357            "params": {
00:28:00.357              "lvs_name": "dd_lvstore",
00:28:00.357              "lvol_name": "dd_lvol",
00:28:00.357              "size": 37748736,
00:28:00.357              "thin_provision": true
00:28:00.357            },
00:28:00.357            "method": "bdev_lvol_create"
00:28:00.357          },
00:28:00.357          {
00:28:00.357            "method": "bdev_wait_for_examine"
00:28:00.357          }
00:28:00.357        ]
00:28:00.357      }
00:28:00.357    ]
00:28:00.357  }
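Compared with the first run, this config adds bdev_lvol_create with thin_provision set, sized to the source file's apparent size (37748736 bytes = 36 MiB): clusters in dd_lvstore are only allocated as the copy writes them, which is what lets the holes carry over into the lvol. A hedged rpc.py equivalent of those params (the current CLI takes the size in MiB; flags assumed from SPDK's rpc.py help, not taken from this run):

    # roughly: create a 36 MiB thin-provisioned lvol on dd_lvstore
    scripts/rpc.py bdev_lvol_create -l dd_lvstore -t dd_lvol 36

The deprecation warning a few lines below is about exactly this request shape: the byte-based req.size field used here was scheduled for removal in favor of the MiB-based one.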
00:28:00.357  [2024-12-14 00:01:30.868143] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:00.357  [2024-12-14 00:01:30.868495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135940 ]
00:28:00.357  [2024-12-14 00:01:31.034384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:00.616  [2024-12-14 00:01:31.216239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:00.875  [2024-12-14 00:01:31.509708] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09
00:28:00.875  
[2024-12-14T00:01:31.607Z] Copying: 12/36 [MB] (average 521 MBps)
[2024-12-14 00:01:31.568767] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times
00:28:02.254  
00:28:02.254  
00:28:02.254  ************************************
00:28:02.254  END TEST dd_sparse_file_to_bdev
00:28:02.254  ************************************
00:28:02.254  
00:28:02.254  real	0m1.921s
00:28:02.254  user	0m1.511s
00:28:02.254  sys	0m0.294s
00:28:02.254   00:01:32	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:02.254   00:01:32	-- common/autotest_common.sh@10 -- # set +x
00:28:02.254   00:01:32	-- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file
00:28:02.254   00:01:32	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:02.254   00:01:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:02.254   00:01:32	-- common/autotest_common.sh@10 -- # set +x
00:28:02.254  ************************************
00:28:02.254  START TEST dd_sparse_bdev_to_file
00:28:02.254  ************************************
00:28:02.254   00:01:32	-- common/autotest_common.sh@1114 -- # bdev_to_file
00:28:02.254   00:01:32	-- dd/sparse.sh@81 -- # local stat2_s stat2_b
00:28:02.254   00:01:32	-- dd/sparse.sh@82 -- # local stat3_s stat3_b
00:28:02.254   00:01:32	-- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:28:02.254   00:01:32	-- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0
00:28:02.254   00:01:32	-- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62
00:28:02.254    00:01:32	-- dd/sparse.sh@91 -- # gen_conf
00:28:02.254    00:01:32	-- dd/common.sh@31 -- # xtrace_disable
00:28:02.254    00:01:32	-- common/autotest_common.sh@10 -- # set +x
00:28:02.254  {
00:28:02.254    "subsystems": [
00:28:02.254      {
00:28:02.254        "subsystem": "bdev",
00:28:02.254        "config": [
00:28:02.254          {
00:28:02.254            "params": {
00:28:02.254              "block_size": 4096,
00:28:02.254              "filename": "dd_sparse_aio_disk",
00:28:02.254              "name": "dd_aio"
00:28:02.254            },
00:28:02.254            "method": "bdev_aio_create"
00:28:02.254          },
00:28:02.254          {
00:28:02.254            "method": "bdev_wait_for_examine"
00:28:02.254          }
00:28:02.254        ]
00:28:02.254      }
00:28:02.254    ]
00:28:02.254  }
00:28:02.254  [2024-12-14 00:01:32.842428] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:02.254  [2024-12-14 00:01:32.842630] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135991 ]
00:28:02.513  [2024-12-14 00:01:33.007208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:02.513  [2024-12-14 00:01:33.193509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:03.081  
[2024-12-14T00:01:34.750Z] Copying: 12/36 [MB] (average 923 MBps)
00:28:04.018  
00:28:04.018    00:01:34	-- dd/sparse.sh@97 -- # stat --printf=%s file_zero2
00:28:04.018   00:01:34	-- dd/sparse.sh@97 -- # stat2_s=37748736
00:28:04.018    00:01:34	-- dd/sparse.sh@98 -- # stat --printf=%s file_zero3
00:28:04.018   00:01:34	-- dd/sparse.sh@98 -- # stat3_s=37748736
00:28:04.018   00:01:34	-- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]]
00:28:04.018    00:01:34	-- dd/sparse.sh@102 -- # stat --printf=%b file_zero2
00:28:04.018   00:01:34	-- dd/sparse.sh@102 -- # stat2_b=24576
00:28:04.018    00:01:34	-- dd/sparse.sh@103 -- # stat --printf=%b file_zero3
00:28:04.018   00:01:34	-- dd/sparse.sh@103 -- # stat3_b=24576
00:28:04.018   00:01:34	-- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]]
00:28:04.018  
00:28:04.018  real	0m1.900s
00:28:04.018  user	0m1.523s
00:28:04.018  sys	0m0.281s
00:28:04.018   00:01:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:04.018  ************************************
00:28:04.018  END TEST dd_sparse_bdev_to_file
00:28:04.018  ************************************
00:28:04.018   00:01:34	-- common/autotest_common.sh@10 -- # set +x
00:28:04.018   00:01:34	-- dd/sparse.sh@1 -- # cleanup
00:28:04.018   00:01:34	-- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk
00:28:04.018   00:01:34	-- dd/sparse.sh@12 -- # rm file_zero1
00:28:04.018   00:01:34	-- dd/sparse.sh@13 -- # rm file_zero2
00:28:04.018   00:01:34	-- dd/sparse.sh@14 -- # rm file_zero3
00:28:04.018  ************************************
00:28:04.018  END TEST spdk_dd_sparse
00:28:04.018  ************************************
00:28:04.018  
00:28:04.018  real	0m6.219s
00:28:04.018  user	0m4.732s
00:28:04.018  sys	0m1.115s
00:28:04.018   00:01:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:04.018   00:01:34	-- common/autotest_common.sh@10 -- # set +x
00:28:04.278   00:01:34	-- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh
00:28:04.278   00:01:34	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:04.278   00:01:34	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:04.278   00:01:34	-- common/autotest_common.sh@10 -- # set +x
00:28:04.278  ************************************
00:28:04.278  START TEST spdk_dd_negative
00:28:04.278  ************************************
00:28:04.278   00:01:34	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh
00:28:04.278  * Looking for test storage...
00:28:04.278  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:28:04.278     00:01:34	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:28:04.278      00:01:34	-- common/autotest_common.sh@1690 -- # lcov --version
00:28:04.278      00:01:34	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:28:04.278     00:01:34	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:28:04.278     00:01:34	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:28:04.278     00:01:34	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:28:04.278     00:01:34	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:28:04.278     00:01:34	-- scripts/common.sh@335 -- # IFS=.-:
00:28:04.278     00:01:34	-- scripts/common.sh@335 -- # read -ra ver1
00:28:04.278     00:01:34	-- scripts/common.sh@336 -- # IFS=.-:
00:28:04.278     00:01:34	-- scripts/common.sh@336 -- # read -ra ver2
00:28:04.278     00:01:34	-- scripts/common.sh@337 -- # local 'op=<'
00:28:04.278     00:01:34	-- scripts/common.sh@339 -- # ver1_l=2
00:28:04.278     00:01:34	-- scripts/common.sh@340 -- # ver2_l=1
00:28:04.278     00:01:34	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:28:04.278     00:01:34	-- scripts/common.sh@343 -- # case "$op" in
00:28:04.278     00:01:34	-- scripts/common.sh@344 -- # : 1
00:28:04.278     00:01:34	-- scripts/common.sh@363 -- # (( v = 0 ))
00:28:04.278     00:01:34	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:04.278      00:01:34	-- scripts/common.sh@364 -- # decimal 1
00:28:04.278      00:01:34	-- scripts/common.sh@352 -- # local d=1
00:28:04.278      00:01:34	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:04.278      00:01:34	-- scripts/common.sh@354 -- # echo 1
00:28:04.278     00:01:34	-- scripts/common.sh@364 -- # ver1[v]=1
00:28:04.278      00:01:34	-- scripts/common.sh@365 -- # decimal 2
00:28:04.278      00:01:34	-- scripts/common.sh@352 -- # local d=2
00:28:04.278      00:01:34	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:04.278      00:01:34	-- scripts/common.sh@354 -- # echo 2
00:28:04.278     00:01:34	-- scripts/common.sh@365 -- # ver2[v]=2
00:28:04.278     00:01:34	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:28:04.278     00:01:34	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:28:04.278     00:01:34	-- scripts/common.sh@367 -- # return 0
00:28:04.278     00:01:34	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:04.278     00:01:34	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:28:04.278  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:04.278  		--rc genhtml_branch_coverage=1
00:28:04.278  		--rc genhtml_function_coverage=1
00:28:04.278  		--rc genhtml_legend=1
00:28:04.278  		--rc geninfo_all_blocks=1
00:28:04.278  		--rc geninfo_unexecuted_blocks=1
00:28:04.278  		
00:28:04.278  		'
00:28:04.278     00:01:34	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:28:04.278  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:04.278  		--rc genhtml_branch_coverage=1
00:28:04.278  		--rc genhtml_function_coverage=1
00:28:04.278  		--rc genhtml_legend=1
00:28:04.278  		--rc geninfo_all_blocks=1
00:28:04.278  		--rc geninfo_unexecuted_blocks=1
00:28:04.278  		
00:28:04.278  		'
00:28:04.278     00:01:34	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:28:04.278  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:04.278  		--rc genhtml_branch_coverage=1
00:28:04.278  		--rc genhtml_function_coverage=1
00:28:04.278  		--rc genhtml_legend=1
00:28:04.278  		--rc geninfo_all_blocks=1
00:28:04.278  		--rc geninfo_unexecuted_blocks=1
00:28:04.278  		
00:28:04.278  		'
00:28:04.278     00:01:34	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:28:04.278  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:04.278  		--rc genhtml_branch_coverage=1
00:28:04.278  		--rc genhtml_function_coverage=1
00:28:04.278  		--rc genhtml_legend=1
00:28:04.278  		--rc geninfo_all_blocks=1
00:28:04.278  		--rc geninfo_unexecuted_blocks=1
00:28:04.278  		
00:28:04.278  		'
00:28:04.278    00:01:34	-- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:28:04.278     00:01:34	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:04.278     00:01:34	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:04.278     00:01:34	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:04.278      00:01:34	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:28:04.278      00:01:34	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:28:04.278      00:01:34	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:28:04.278      00:01:34	-- paths/export.sh@5 -- # export PATH
00:28:04.278      00:01:34	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:28:04.278   00:01:34	-- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:28:04.278   00:01:34	-- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:28:04.278   00:01:34	-- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:28:04.278   00:01:34	-- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:28:04.278   00:01:34	-- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments
00:28:04.278   00:01:34	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:04.278   00:01:34	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:04.278   00:01:34	-- common/autotest_common.sh@10 -- # set +x
00:28:04.541  ************************************
00:28:04.541  START TEST dd_invalid_arguments
00:28:04.541  ************************************
00:28:04.541   00:01:35	-- common/autotest_common.sh@1114 -- # invalid_arguments
00:28:04.541   00:01:35	-- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:28:04.541   00:01:35	-- common/autotest_common.sh@650 -- # local es=0
00:28:04.541   00:01:35	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:28:04.541   00:01:35	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:04.541   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:04.541    00:01:35	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:04.541   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:04.541    00:01:35	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:04.541   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:04.541   00:01:35	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:04.541   00:01:35	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:28:04.541   00:01:35	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:28:04.541  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii='
00:28:04.541  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options]
00:28:04.541  options:
00:28:04.541   -c, --config <config>     JSON config file (default none)
00:28:04.541       --json <config>       JSON config file (default none)
00:28:04.541       --json-ignore-init-errors
00:28:04.541                             don't exit on invalid config entry
00:28:04.541   -d, --limit-coredump      do not set max coredump size to RLIM_INFINITY
00:28:04.541   -g, --single-file-segments
00:28:04.541                             force creating just one hugetlbfs file
00:28:04.541   -h, --help                show this usage
00:28:04.541   -i, --shm-id <id>         shared memory ID (optional)
00:28:04.541   -m, --cpumask <mask or list>    core mask (like 0xF) or bracketed core list (like [0,1,10]) for DPDK
00:28:04.541       --lcores <list>       lcore to CPU mapping list. The list is in the format:
00:28:04.541                             <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:28:04.541                             lcore and CPU lists are grouped by '(' and ')', e.g. '--lcores "(5-7)@(10-12)"'
00:28:04.541                             Within a group, '-' is the range separator and
00:28:04.541                             ',' is the single-number separator.
00:28:04.541                             '( )' can be omitted for a single-element group,
00:28:04.541                             '@' can be omitted if the CPU and lcore values are the same
00:28:04.541   -n, --mem-channels <num>  channel number of memory channels used for DPDK
00:28:04.541   -p, --main-core <id>      main (primary) core for DPDK
00:28:04.541   -r, --rpc-socket <path>   RPC listen address (default /var/tmp/spdk.sock)
00:28:04.541   -s, --mem-size <size>     memory size in MB for DPDK (default: 0MB)
00:28:04.541       --disable-cpumask-locks    Disable CPU core lock files.
00:28:04.541       --silence-noticelog   disable notice level logging to stderr
00:28:04.541       --msg-mempool-size <size>  global message memory pool size in count (default: 262143)
00:28:04.541   -u, --no-pci              disable PCI access
00:28:04.541       --wait-for-rpc        wait for RPCs to initialize subsystems
00:28:04.541       --max-delay <num>     maximum reactor delay (in microseconds)
00:28:04.541   -B, --pci-blocked <bdf>   pci addr to block (can be used more than once)
00:28:04.541   -A, --pci-allowed <bdf>   pci addr to allow (-B and -A cannot be used at the same time)
00:28:04.541   -R, --huge-unlink         unlink huge files after initialization
00:28:04.541   -v, --version             print SPDK version
00:28:04.541       --huge-dir <path>     use a specific hugetlbfs mount to reserve memory from
00:28:04.542       --iova-mode <pa/va>   set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:28:04.542       --base-virtaddr <addr>      the base virtual address for DPDK (default: 0x200000000000)
00:28:04.542       --num-trace-entries <num>   number of trace entries for each core; must be a power of 2; set to 0 to disable trace (default 32768)
00:28:04.542                                   Tracepoints vary in size and can use more than one trace entry.
00:28:04.542       --rpcs-allowed        comma-separated list of permitted RPCs
00:28:04.542       --env-context         Opaque context for use of the env implementation
00:28:04.542       --vfio-vf-token       VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:28:04.542       --no-huge             run without using hugepages
00:28:04.542   -L, --logflag <flag>    enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd)
00:28:04.542   -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:28:04.542                             group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all)
00:28:04.542                             tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1).
00:28:04.542                              Groups and masks can be combined (e.g. thread,bdev:0x1).
00:28:04.542  [2024-12-14 00:01:35.087019] spdk_dd.c:1460:main: *ERROR*: Invalid arguments
00:28:04.542                              All available tpoints can be found in /include/spdk_internal/trace_defs.h
00:28:04.542       --interrupt-mode      set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode)
00:28:04.542  [--------- DD Options ---------]
00:28:04.542   --if Input file. Must specify either --if or --ib.
00:28:04.542   --ib Input bdev. Must specify either --if or --ib.
00:28:04.542   --of Output file. Must specify either --of or --ob.
00:28:04.542   --ob Output bdev. Must specify either --of or --ob.
00:28:04.542   --iflag Input file flags.
00:28:04.542   --oflag Output file flags.
00:28:04.542   --bs I/O unit size (default: 4096)
00:28:04.542   --qd Queue depth (default: 2)
00:28:04.542   --count I/O unit count. The number of I/O units to copy. (default: all)
00:28:04.542   --skip Skip this many I/O units at start of input. (default: 0)
00:28:04.542   --seek Skip this many I/O units at start of output. (default: 0)
00:28:04.542   --aio Force usage of AIO. (by default io_uring is used if available)
00:28:04.542   --sparse Enable hole skipping in input target
00:28:04.542   Available iflag and oflag values:
00:28:04.542    append - append mode
00:28:04.542    direct - use direct I/O for data
00:28:04.542    directory - fail unless a directory
00:28:04.542    dsync - use synchronized I/O for data
00:28:04.542    noatime - do not update access time
00:28:04.542    noctty - do not assign controlling terminal from file
00:28:04.542    nofollow - do not follow symlinks
00:28:04.542    nonblock - use non-blocking I/O
00:28:04.542    sync - use synchronized I/O for data and metadata
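Putting the DD options above together, a well-formed invocation would look roughly like this (file names illustrative): copy two 12 MiB I/O units, skipping the first unit of input, with holes preserved:

    ./build/bin/spdk_dd --if=file_zero1 --of=out.bin --bs=12582912 --count=2 --skip=1 --sparse

Each negative test in this suite violates exactly one of these rules and asserts that spdk_dd refuses to run.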
00:28:04.542   00:01:35	-- common/autotest_common.sh@653 -- # es=2
00:28:04.542   00:01:35	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:04.542   00:01:35	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:04.542   00:01:35	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
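The assertion chain above is the shape of every test in this suite: the helper resolves the binary (type -t / type -P), runs it expecting failure, and the trailing arithmetic passes only for a nonzero exit status. Stripped of the xtrace plumbing, the idiom is roughly (a sketch, not the autotest_common.sh source):

    NOT() {
        local es=0
        "$@" || es=$?       # capture the exit status instead of aborting under set -e
        (( es != 0 ))       # the test passes only if the command failed
    }
    NOT ./build/bin/spdk_dd --ii= --ob=    # es=2 here: unrecognized option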
00:28:04.542  
00:28:04.542  real	0m0.118s
00:28:04.542  user	0m0.050s
00:28:04.542  sys	0m0.065s
00:28:04.542   00:01:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:04.542   00:01:35	-- common/autotest_common.sh@10 -- # set +x
00:28:04.542  ************************************
00:28:04.542  END TEST dd_invalid_arguments
00:28:04.542  ************************************
00:28:04.542   00:01:35	-- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input
00:28:04.542   00:01:35	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:04.542   00:01:35	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:04.542   00:01:35	-- common/autotest_common.sh@10 -- # set +x
00:28:04.542  ************************************
00:28:04.542  START TEST dd_double_input
00:28:04.542  ************************************
00:28:04.542   00:01:35	-- common/autotest_common.sh@1114 -- # double_input
00:28:04.542   00:01:35	-- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:28:04.542   00:01:35	-- common/autotest_common.sh@650 -- # local es=0
00:28:04.542   00:01:35	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:28:04.542   00:01:35	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:04.542   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:04.542    00:01:35	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:04.542   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:04.542    00:01:35	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:04.542   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:04.542   00:01:35	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:04.542   00:01:35	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:28:04.542   00:01:35	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:28:04.543  [2024-12-14 00:01:35.262773] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both.
00:28:04.803   00:01:35	-- common/autotest_common.sh@653 -- # es=22
00:28:04.803   00:01:35	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:04.803   00:01:35	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:04.803   00:01:35	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:04.803  
00:28:04.803  real	0m0.114s
00:28:04.803  user	0m0.051s
00:28:04.803  sys	0m0.059s
00:28:04.803   00:01:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:04.803   00:01:35	-- common/autotest_common.sh@10 -- # set +x
00:28:04.803  ************************************
00:28:04.803  END TEST dd_double_input
00:28:04.803  ************************************
00:28:04.803   00:01:35	-- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output
00:28:04.803   00:01:35	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:04.803   00:01:35	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:04.803   00:01:35	-- common/autotest_common.sh@10 -- # set +x
00:28:04.803  ************************************
00:28:04.803  START TEST dd_double_output
00:28:04.803  ************************************
00:28:04.803   00:01:35	-- common/autotest_common.sh@1114 -- # double_output
00:28:04.803   00:01:35	-- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:28:04.803   00:01:35	-- common/autotest_common.sh@650 -- # local es=0
00:28:04.803   00:01:35	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:28:04.803   00:01:35	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:04.803   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:04.803    00:01:35	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:04.803   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:04.803    00:01:35	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:04.803   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:04.803   00:01:35	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:04.803   00:01:35	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:28:04.803   00:01:35	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:28:04.803  [2024-12-14 00:01:35.429193] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both.
00:28:04.803   00:01:35	-- common/autotest_common.sh@653 -- # es=22
00:28:04.803   00:01:35	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:04.803   00:01:35	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:04.803   00:01:35	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:04.803  
00:28:04.803  real	0m0.110s
00:28:04.803  user	0m0.051s
00:28:04.803  sys	0m0.057s
00:28:04.803   00:01:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:04.803   00:01:35	-- common/autotest_common.sh@10 -- # set +x
00:28:04.803  ************************************
00:28:04.803  END TEST dd_double_output
00:28:04.803  ************************************
00:28:04.803   00:01:35	-- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input
00:28:04.803   00:01:35	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:04.803   00:01:35	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:04.803   00:01:35	-- common/autotest_common.sh@10 -- # set +x
00:28:04.803  ************************************
00:28:04.803  START TEST dd_no_input
00:28:04.803  ************************************
00:28:04.803   00:01:35	-- common/autotest_common.sh@1114 -- # no_input
00:28:04.803   00:01:35	-- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:28:04.803   00:01:35	-- common/autotest_common.sh@650 -- # local es=0
00:28:04.803   00:01:35	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:28:04.803   00:01:35	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:04.803   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:04.803    00:01:35	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:04.803   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:04.803    00:01:35	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:04.803   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:04.803   00:01:35	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:04.803   00:01:35	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:28:04.803   00:01:35	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:28:05.062  [2024-12-14 00:01:35.599497] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib
00:28:05.062   00:01:35	-- common/autotest_common.sh@653 -- # es=22
00:28:05.062   00:01:35	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:05.062   00:01:35	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:05.062   00:01:35	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:05.062  
00:28:05.062  real	0m0.118s
00:28:05.062  user	0m0.058s
00:28:05.062  sys	0m0.058s
00:28:05.062   00:01:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:05.062   00:01:35	-- common/autotest_common.sh@10 -- # set +x
00:28:05.062  ************************************
00:28:05.062  END TEST dd_no_input
00:28:05.062  ************************************
00:28:05.062   00:01:35	-- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output
00:28:05.062   00:01:35	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:05.062   00:01:35	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:05.062   00:01:35	-- common/autotest_common.sh@10 -- # set +x
00:28:05.062  ************************************
00:28:05.062  START TEST dd_no_output
00:28:05.062  ************************************
00:28:05.062   00:01:35	-- common/autotest_common.sh@1114 -- # no_output
00:28:05.062   00:01:35	-- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:28:05.062   00:01:35	-- common/autotest_common.sh@650 -- # local es=0
00:28:05.062   00:01:35	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:28:05.062   00:01:35	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:05.062   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:05.062    00:01:35	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:05.062   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:05.062    00:01:35	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:05.062   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:05.062   00:01:35	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:05.062   00:01:35	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:28:05.062   00:01:35	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:28:05.062  [2024-12-14 00:01:35.777856] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob
00:28:05.322   00:01:35	-- common/autotest_common.sh@653 -- # es=22
00:28:05.322   00:01:35	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:05.322   00:01:35	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:05.322   00:01:35	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:05.322  
00:28:05.322  real	0m0.117s
00:28:05.322  user	0m0.078s
00:28:05.322  sys	0m0.037s
00:28:05.322   00:01:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:05.322   00:01:35	-- common/autotest_common.sh@10 -- # set +x
00:28:05.322  ************************************
00:28:05.322  END TEST dd_no_output
00:28:05.322  ************************************
00:28:05.322   00:01:35	-- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize
00:28:05.322   00:01:35	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:05.322   00:01:35	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:05.322   00:01:35	-- common/autotest_common.sh@10 -- # set +x
00:28:05.322  ************************************
00:28:05.322  START TEST dd_wrong_blocksize
00:28:05.322  ************************************
00:28:05.322   00:01:35	-- common/autotest_common.sh@1114 -- # wrong_blocksize
00:28:05.322   00:01:35	-- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0
00:28:05.322   00:01:35	-- common/autotest_common.sh@650 -- # local es=0
00:28:05.322   00:01:35	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0
00:28:05.322   00:01:35	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:05.322   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:05.322    00:01:35	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:05.322   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:05.322    00:01:35	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:05.322   00:01:35	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:05.322   00:01:35	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:05.322   00:01:35	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:28:05.322   00:01:35	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0
00:28:05.322  [2024-12-14 00:01:35.949989] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value
00:28:05.322   00:01:35	-- common/autotest_common.sh@653 -- # es=22
00:28:05.322   00:01:35	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:05.322   00:01:35	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:05.322   00:01:35	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:05.322  
00:28:05.322  real	0m0.114s
00:28:05.322  user	0m0.045s
00:28:05.322  sys	0m0.067s
00:28:05.322   00:01:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:05.322   00:01:35	-- common/autotest_common.sh@10 -- # set +x
00:28:05.322  ************************************
00:28:05.322  END TEST dd_wrong_blocksize
00:28:05.322  ************************************
00:28:05.322   00:01:36	-- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize
00:28:05.322   00:01:36	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:05.322   00:01:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:05.322   00:01:36	-- common/autotest_common.sh@10 -- # set +x
00:28:05.582  ************************************
00:28:05.582  START TEST dd_smaller_blocksize
00:28:05.582  ************************************
00:28:05.582   00:01:36	-- common/autotest_common.sh@1114 -- # smaller_blocksize
00:28:05.582   00:01:36	-- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999
00:28:05.582   00:01:36	-- common/autotest_common.sh@650 -- # local es=0
00:28:05.582   00:01:36	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999
00:28:05.582   00:01:36	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:05.582   00:01:36	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:05.582    00:01:36	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:05.582   00:01:36	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:05.582    00:01:36	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:05.582   00:01:36	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:05.582   00:01:36	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:05.582   00:01:36	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:28:05.582   00:01:36	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999
00:28:05.582  [2024-12-14 00:01:36.132113] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:05.582  [2024-12-14 00:01:36.132839] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136270 ]
00:28:05.582  [2024-12-14 00:01:36.302893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:05.841  [2024-12-14 00:01:36.550164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:06.409  EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list
00:28:06.668  [2024-12-14 00:01:37.165293] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value
00:28:06.668  [2024-12-14 00:01:37.165403] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:07.237  [2024-12-14 00:01:37.806033] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:28:07.496   00:01:38	-- common/autotest_common.sh@653 -- # es=244
00:28:07.496   00:01:38	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:07.496   00:01:38	-- common/autotest_common.sh@662 -- # es=116
00:28:07.496   00:01:38	-- common/autotest_common.sh@663 -- # case "$es" in
00:28:07.496   00:01:38	-- common/autotest_common.sh@670 -- # es=1
00:28:07.496   00:01:38	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
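Here the helper does more than check for nonzero: es=244 is above 128, so it is folded back by 128 to 116 (the shell convention that 128+N means "killed by signal N"), and a case statement then collapses the recognized code to es=1 before the final nonzero assertion. In outline (a reconstruction of the lines above, not the helper's source; the 116 mapping is assumed from this run):

    es=244
    (( es > 128 )) && es=$(( es - 128 ))   # 244 -> 116
    case "$es" in
        116) es=1 ;;                       # normalize the known failure code (assumed mapping)
    esac
    (( es != 0 ))                          # the negative test still requires a failure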
00:28:07.496  
00:28:07.496  real	0m2.118s
00:28:07.496  user	0m1.456s
00:28:07.496  sys	0m0.558s
00:28:07.496   00:01:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:07.496  ************************************
00:28:07.496   00:01:38	-- common/autotest_common.sh@10 -- # set +x
00:28:07.496  END TEST dd_smaller_blocksize
00:28:07.496  ************************************
00:28:07.496   00:01:38	-- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count
00:28:07.496   00:01:38	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:07.496   00:01:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:07.496   00:01:38	-- common/autotest_common.sh@10 -- # set +x
00:28:07.755  ************************************
00:28:07.755  START TEST dd_invalid_count
00:28:07.755  ************************************
00:28:07.755   00:01:38	-- common/autotest_common.sh@1114 -- # invalid_count
00:28:07.755   00:01:38	-- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9
00:28:07.755   00:01:38	-- common/autotest_common.sh@650 -- # local es=0
00:28:07.755   00:01:38	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9
00:28:07.755   00:01:38	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:07.755   00:01:38	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:07.755    00:01:38	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:07.755   00:01:38	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:07.755    00:01:38	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:07.755   00:01:38	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:07.755   00:01:38	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:07.755   00:01:38	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:28:07.755   00:01:38	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9
00:28:07.755  [2024-12-14 00:01:38.295847] spdk_dd.c:1497:main: *ERROR*: Invalid --count value
00:28:07.755   00:01:38	-- common/autotest_common.sh@653 -- # es=22
00:28:07.756   00:01:38	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:07.756   00:01:38	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:07.756   00:01:38	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:07.756  
00:28:07.756  real	0m0.111s
00:28:07.756  user	0m0.061s
00:28:07.756  sys	0m0.050s
00:28:07.756   00:01:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:07.756   00:01:38	-- common/autotest_common.sh@10 -- # set +x
00:28:07.756  ************************************
00:28:07.756  END TEST dd_invalid_count
00:28:07.756  ************************************
00:28:07.756   00:01:38	-- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag
00:28:07.756   00:01:38	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:07.756   00:01:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:07.756   00:01:38	-- common/autotest_common.sh@10 -- # set +x
00:28:07.756  ************************************
00:28:07.756  START TEST dd_invalid_oflag
00:28:07.756  ************************************
00:28:07.756   00:01:38	-- common/autotest_common.sh@1114 -- # invalid_oflag
00:28:07.756   00:01:38	-- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0
00:28:07.756   00:01:38	-- common/autotest_common.sh@650 -- # local es=0
00:28:07.756   00:01:38	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0
00:28:07.756   00:01:38	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:07.756   00:01:38	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:07.756    00:01:38	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:07.756   00:01:38	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:07.756    00:01:38	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:07.756   00:01:38	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:07.756   00:01:38	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:07.756   00:01:38	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:28:07.756   00:01:38	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0
00:28:07.756  [2024-12-14 00:01:38.458562] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of
00:28:08.015   00:01:38	-- common/autotest_common.sh@653 -- # es=22
00:28:08.015   00:01:38	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:08.015   00:01:38	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:08.015   00:01:38	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:08.015  
00:28:08.015  real	0m0.109s
00:28:08.015  user	0m0.042s
00:28:08.015  sys	0m0.068s
00:28:08.015   00:01:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:08.015   00:01:38	-- common/autotest_common.sh@10 -- # set +x
00:28:08.015  ************************************
00:28:08.015  END TEST dd_invalid_oflag
00:28:08.015  ************************************
00:28:08.015   00:01:38	-- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag
00:28:08.015   00:01:38	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:08.015   00:01:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:08.015   00:01:38	-- common/autotest_common.sh@10 -- # set +x
00:28:08.015  ************************************
00:28:08.015  START TEST dd_invalid_iflag
00:28:08.015  ************************************
00:28:08.015   00:01:38	-- common/autotest_common.sh@1114 -- # invalid_iflag
00:28:08.015   00:01:38	-- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0
00:28:08.015   00:01:38	-- common/autotest_common.sh@650 -- # local es=0
00:28:08.015   00:01:38	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0
00:28:08.015   00:01:38	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:08.015   00:01:38	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:08.015    00:01:38	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:08.015   00:01:38	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:08.015    00:01:38	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:08.015   00:01:38	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:08.015   00:01:38	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:08.015   00:01:38	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:28:08.015   00:01:38	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0
00:28:08.015  [2024-12-14 00:01:38.614112] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if
00:28:08.015   00:01:38	-- common/autotest_common.sh@653 -- # es=22
00:28:08.015   00:01:38	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:08.015   00:01:38	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:08.015   00:01:38	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:08.015  
00:28:08.015  real	0m0.098s
00:28:08.015  user	0m0.069s
00:28:08.015  sys	0m0.029s
00:28:08.015   00:01:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:08.015   00:01:38	-- common/autotest_common.sh@10 -- # set +x
00:28:08.015  ************************************
00:28:08.015  END TEST dd_invalid_iflag
00:28:08.015  ************************************
00:28:08.015   00:01:38	-- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag
00:28:08.015   00:01:38	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:08.015   00:01:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:08.015   00:01:38	-- common/autotest_common.sh@10 -- # set +x
00:28:08.015  ************************************
00:28:08.015  START TEST dd_unknown_flag
00:28:08.015  ************************************
00:28:08.015   00:01:38	-- common/autotest_common.sh@1114 -- # unknown_flag
00:28:08.015   00:01:38	-- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
00:28:08.015   00:01:38	-- common/autotest_common.sh@650 -- # local es=0
00:28:08.015   00:01:38	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
00:28:08.015   00:01:38	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:08.015   00:01:38	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:08.015    00:01:38	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:08.015   00:01:38	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:08.015    00:01:38	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:08.015   00:01:38	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:08.015   00:01:38	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:08.015   00:01:38	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:28:08.015   00:01:38	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
00:28:08.274  [2024-12-14 00:01:38.777371] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:08.274  [2024-12-14 00:01:38.777557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136408 ]
00:28:08.274  [2024-12-14 00:01:38.943602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:08.533  [2024-12-14 00:01:39.132971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:08.792  [2024-12-14 00:01:39.414322] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1
00:28:08.792  [2024-12-14 00:01:39.414430] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory
00:28:08.792  [2024-12-14 00:01:39.414455] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory
00:28:08.792  [2024-12-14 00:01:39.414513] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:09.360  [2024-12-14 00:01:40.044042] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:28:09.928   00:01:40	-- common/autotest_common.sh@653 -- # es=236
00:28:09.928   00:01:40	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:09.928   00:01:40	-- common/autotest_common.sh@662 -- # es=108
00:28:09.928   00:01:40	-- common/autotest_common.sh@663 -- # case "$es" in
00:28:09.928   00:01:40	-- common/autotest_common.sh@670 -- # es=1
00:28:09.928   00:01:40	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:09.928  
00:28:09.928  real	0m1.706s
00:28:09.928  user	0m1.346s
00:28:09.928  sys	0m0.261s
00:28:09.928   00:01:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:09.928   00:01:40	-- common/autotest_common.sh@10 -- # set +x
00:28:09.928  ************************************
00:28:09.928  END TEST dd_unknown_flag
00:28:09.928  ************************************
00:28:09.928   00:01:40	-- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json
00:28:09.928   00:01:40	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:09.928   00:01:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:09.928   00:01:40	-- common/autotest_common.sh@10 -- # set +x
00:28:09.928  ************************************
00:28:09.928  START TEST dd_invalid_json
00:28:09.928  ************************************
00:28:09.928   00:01:40	-- common/autotest_common.sh@1114 -- # invalid_json
00:28:09.928   00:01:40	-- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62
00:28:09.928   00:01:40	-- common/autotest_common.sh@650 -- # local es=0
00:28:09.928    00:01:40	-- dd/negative_dd.sh@95 -- # :
00:28:09.928   00:01:40	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62
00:28:09.928   00:01:40	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:09.928   00:01:40	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:09.928    00:01:40	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:09.928   00:01:40	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:09.928    00:01:40	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:09.928   00:01:40	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:09.928   00:01:40	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:28:09.928   00:01:40	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:28:09.928   00:01:40	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62
00:28:09.928  [2024-12-14 00:01:40.533154] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:09.928  [2024-12-14 00:01:40.533314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136456 ]
00:28:10.187  [2024-12-14 00:01:40.686686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:10.187  [2024-12-14 00:01:40.870404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:10.187  [2024-12-14 00:01:40.870638] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2)
00:28:10.187  [2024-12-14 00:01:40.870683] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:10.187  [2024-12-14 00:01:40.870759] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:28:10.755   00:01:41	-- common/autotest_common.sh@653 -- # es=234
00:28:10.755   00:01:41	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:10.755   00:01:41	-- common/autotest_common.sh@662 -- # es=106
00:28:10.755   00:01:41	-- common/autotest_common.sh@663 -- # case "$es" in
00:28:10.755   00:01:41	-- common/autotest_common.sh@670 -- # es=1
00:28:10.755   00:01:41	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:10.755  
00:28:10.755  real	0m0.737s
00:28:10.755  user	0m0.475s
00:28:10.755  sys	0m0.164s
00:28:10.755   00:01:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:10.755   00:01:41	-- common/autotest_common.sh@10 -- # set +x
00:28:10.755  ************************************
00:28:10.755  END TEST dd_invalid_json
00:28:10.755  ************************************
00:28:10.755  
00:28:10.755  real	0m6.446s
00:28:10.755  user	0m4.308s
00:28:10.755  sys	0m1.767s
00:28:10.755   00:01:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:10.755   00:01:41	-- common/autotest_common.sh@10 -- # set +x
00:28:10.755  ************************************
00:28:10.755  END TEST spdk_dd_negative
00:28:10.755  ************************************
00:28:10.755  ************************************
00:28:10.755  END TEST spdk_dd
00:28:10.755  ************************************
00:28:10.755  
00:28:10.755  real	2m25.920s
00:28:10.755  user	1m52.360s
00:28:10.755  sys	0m22.991s
00:28:10.755   00:01:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:10.755   00:01:41	-- common/autotest_common.sh@10 -- # set +x
00:28:10.755   00:01:41	-- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']'
00:28:10.755   00:01:41	-- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:28:10.755   00:01:41	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:28:10.755   00:01:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:10.755   00:01:41	-- common/autotest_common.sh@10 -- # set +x
00:28:10.755  ************************************
00:28:10.755  START TEST blockdev_nvme
00:28:10.755  ************************************
00:28:10.755   00:01:41	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:28:10.755  * Looking for test storage...
00:28:10.755  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:28:10.755    00:01:41	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:28:10.755     00:01:41	-- common/autotest_common.sh@1690 -- # lcov --version
00:28:10.755     00:01:41	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:28:11.014    00:01:41	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:28:11.014    00:01:41	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:28:11.014    00:01:41	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:28:11.014    00:01:41	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:28:11.014    00:01:41	-- scripts/common.sh@335 -- # IFS=.-:
00:28:11.014    00:01:41	-- scripts/common.sh@335 -- # read -ra ver1
00:28:11.014    00:01:41	-- scripts/common.sh@336 -- # IFS=.-:
00:28:11.014    00:01:41	-- scripts/common.sh@336 -- # read -ra ver2
00:28:11.014    00:01:41	-- scripts/common.sh@337 -- # local 'op=<'
00:28:11.014    00:01:41	-- scripts/common.sh@339 -- # ver1_l=2
00:28:11.014    00:01:41	-- scripts/common.sh@340 -- # ver2_l=1
00:28:11.014    00:01:41	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:28:11.014    00:01:41	-- scripts/common.sh@343 -- # case "$op" in
00:28:11.014    00:01:41	-- scripts/common.sh@344 -- # : 1
00:28:11.014    00:01:41	-- scripts/common.sh@363 -- # (( v = 0 ))
00:28:11.014    00:01:41	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:11.014     00:01:41	-- scripts/common.sh@364 -- # decimal 1
00:28:11.014     00:01:41	-- scripts/common.sh@352 -- # local d=1
00:28:11.014     00:01:41	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:11.014     00:01:41	-- scripts/common.sh@354 -- # echo 1
00:28:11.014    00:01:41	-- scripts/common.sh@364 -- # ver1[v]=1
00:28:11.014     00:01:41	-- scripts/common.sh@365 -- # decimal 2
00:28:11.014     00:01:41	-- scripts/common.sh@352 -- # local d=2
00:28:11.014     00:01:41	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:11.014     00:01:41	-- scripts/common.sh@354 -- # echo 2
00:28:11.014    00:01:41	-- scripts/common.sh@365 -- # ver2[v]=2
00:28:11.014    00:01:41	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:28:11.014    00:01:41	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:28:11.014    00:01:41	-- scripts/common.sh@367 -- # return 0
00:28:11.014    00:01:41	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:11.014    00:01:41	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:28:11.014  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:11.014  		--rc genhtml_branch_coverage=1
00:28:11.014  		--rc genhtml_function_coverage=1
00:28:11.014  		--rc genhtml_legend=1
00:28:11.014  		--rc geninfo_all_blocks=1
00:28:11.014  		--rc geninfo_unexecuted_blocks=1
00:28:11.014  		
00:28:11.014  		'
00:28:11.014    00:01:41	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:28:11.014  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:11.014  		--rc genhtml_branch_coverage=1
00:28:11.014  		--rc genhtml_function_coverage=1
00:28:11.014  		--rc genhtml_legend=1
00:28:11.014  		--rc geninfo_all_blocks=1
00:28:11.014  		--rc geninfo_unexecuted_blocks=1
00:28:11.014  		
00:28:11.014  		'
00:28:11.014    00:01:41	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:28:11.014  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:11.014  		--rc genhtml_branch_coverage=1
00:28:11.014  		--rc genhtml_function_coverage=1
00:28:11.014  		--rc genhtml_legend=1
00:28:11.015  		--rc geninfo_all_blocks=1
00:28:11.015  		--rc geninfo_unexecuted_blocks=1
00:28:11.015  		
00:28:11.015  		'
00:28:11.015    00:01:41	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:28:11.015  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:11.015  		--rc genhtml_branch_coverage=1
00:28:11.015  		--rc genhtml_function_coverage=1
00:28:11.015  		--rc genhtml_legend=1
00:28:11.015  		--rc geninfo_all_blocks=1
00:28:11.015  		--rc geninfo_unexecuted_blocks=1
00:28:11.015  		
00:28:11.015  		'
00:28:11.015   00:01:41	-- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:28:11.015    00:01:41	-- bdev/nbd_common.sh@6 -- # set -e
00:28:11.015   00:01:41	-- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:28:11.015   00:01:41	-- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:28:11.015   00:01:41	-- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:28:11.015   00:01:41	-- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:28:11.015   00:01:41	-- bdev/blockdev.sh@18 -- # :
00:28:11.015   00:01:41	-- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0
00:28:11.015   00:01:41	-- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1
00:28:11.015   00:01:41	-- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5
00:28:11.015    00:01:41	-- bdev/blockdev.sh@672 -- # uname -s
00:28:11.015   00:01:41	-- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']'
00:28:11.015   00:01:41	-- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0
00:28:11.015   00:01:41	-- bdev/blockdev.sh@680 -- # test_type=nvme
00:28:11.015   00:01:41	-- bdev/blockdev.sh@681 -- # crypto_device=
00:28:11.015   00:01:41	-- bdev/blockdev.sh@682 -- # dek=
00:28:11.015   00:01:41	-- bdev/blockdev.sh@683 -- # env_ctx=
00:28:11.015   00:01:41	-- bdev/blockdev.sh@684 -- # wait_for_rpc=
00:28:11.015   00:01:41	-- bdev/blockdev.sh@685 -- # '[' -n '' ']'
00:28:11.015   00:01:41	-- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]]
00:28:11.015   00:01:41	-- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]]
00:28:11.015   00:01:41	-- bdev/blockdev.sh@691 -- # start_spdk_tgt
00:28:11.015   00:01:41	-- bdev/blockdev.sh@45 -- # spdk_tgt_pid=136558
00:28:11.015   00:01:41	-- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:28:11.015   00:01:41	-- bdev/blockdev.sh@47 -- # waitforlisten 136558
00:28:11.015   00:01:41	-- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:28:11.015   00:01:41	-- common/autotest_common.sh@829 -- # '[' -z 136558 ']'
00:28:11.015   00:01:41	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:11.015   00:01:41	-- common/autotest_common.sh@834 -- # local max_retries=100
00:28:11.015   00:01:41	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:11.015  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:11.015   00:01:41	-- common/autotest_common.sh@838 -- # xtrace_disable
00:28:11.015   00:01:41	-- common/autotest_common.sh@10 -- # set +x
00:28:11.015  [2024-12-14 00:01:41.599971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:11.015  [2024-12-14 00:01:41.600178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136558 ]
00:28:11.274  [2024-12-14 00:01:41.766723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:11.274  [2024-12-14 00:01:41.944165] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:28:11.274  [2024-12-14 00:01:41.944410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:12.653   00:01:43	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:12.653   00:01:43	-- common/autotest_common.sh@862 -- # return 0
00:28:12.653   00:01:43	-- bdev/blockdev.sh@692 -- # case "$test_type" in
00:28:12.653   00:01:43	-- bdev/blockdev.sh@697 -- # setup_nvme_conf
00:28:12.653   00:01:43	-- bdev/blockdev.sh@79 -- # local json
00:28:12.653   00:01:43	-- bdev/blockdev.sh@80 -- # mapfile -t json
00:28:12.653    00:01:43	-- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:28:12.653   00:01:43	-- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\'''
00:28:12.653   00:01:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:12.653   00:01:43	-- common/autotest_common.sh@10 -- # set +x
00:28:12.653   00:01:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:12.653   00:01:43	-- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine
00:28:12.653   00:01:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:12.653   00:01:43	-- common/autotest_common.sh@10 -- # set +x
00:28:12.913   00:01:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:12.913   00:01:43	-- bdev/blockdev.sh@738 -- # cat
00:28:12.913    00:01:43	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel
00:28:12.913    00:01:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:12.913    00:01:43	-- common/autotest_common.sh@10 -- # set +x
00:28:12.913    00:01:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:12.913    00:01:43	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev
00:28:12.913    00:01:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:12.913    00:01:43	-- common/autotest_common.sh@10 -- # set +x
00:28:12.913    00:01:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:12.913    00:01:43	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf
00:28:12.913    00:01:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:12.913    00:01:43	-- common/autotest_common.sh@10 -- # set +x
00:28:12.913    00:01:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:12.913   00:01:43	-- bdev/blockdev.sh@746 -- # mapfile -t bdevs
00:28:12.913    00:01:43	-- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs
00:28:12.913    00:01:43	-- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)'
00:28:12.913    00:01:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:12.913    00:01:43	-- common/autotest_common.sh@10 -- # set +x
00:28:12.913    00:01:43	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:12.913   00:01:43	-- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name
00:28:12.913    00:01:43	-- bdev/blockdev.sh@747 -- # printf '%s\n' '{' '  "name": "Nvme0n1",' '  "aliases": [' '    "844640fa-12e6-413e-8488-f26eae9fff9b"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "844640fa-12e6-413e-8488-f26eae9fff9b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": true,' '    "nvme_io": true' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:06.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:06.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12340",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12340",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}'
00:28:12.913    00:01:43	-- bdev/blockdev.sh@747 -- # jq -r .name
00:28:12.913   00:01:43	-- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}")
00:28:12.913   00:01:43	-- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1
00:28:12.913   00:01:43	-- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT
00:28:12.913   00:01:43	-- bdev/blockdev.sh@752 -- # killprocess 136558
00:28:12.913   00:01:43	-- common/autotest_common.sh@936 -- # '[' -z 136558 ']'
00:28:12.913   00:01:43	-- common/autotest_common.sh@940 -- # kill -0 136558
00:28:12.913    00:01:43	-- common/autotest_common.sh@941 -- # uname
00:28:12.913   00:01:43	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:12.913    00:01:43	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136558
00:28:12.913   00:01:43	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:12.913   00:01:43	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:12.913   00:01:43	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 136558'
00:28:12.913  killing process with pid 136558
00:28:12.913   00:01:43	-- common/autotest_common.sh@955 -- # kill 136558
00:28:12.913   00:01:43	-- common/autotest_common.sh@960 -- # wait 136558
00:28:14.851   00:01:45	-- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT
00:28:14.851   00:01:45	-- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:28:14.851   00:01:45	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:28:14.851   00:01:45	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:14.851   00:01:45	-- common/autotest_common.sh@10 -- # set +x
00:28:14.851  ************************************
00:28:14.851  START TEST bdev_hello_world
00:28:14.851  ************************************
00:28:14.851   00:01:45	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:28:14.851  [2024-12-14 00:01:45.553309] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:14.851  [2024-12-14 00:01:45.553534] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136649 ]
00:28:15.110  [2024-12-14 00:01:45.719254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:15.368  [2024-12-14 00:01:45.909372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:15.627  [2024-12-14 00:01:46.319488] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:28:15.627  [2024-12-14 00:01:46.319576] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:28:15.627  [2024-12-14 00:01:46.319609] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:28:15.627  [2024-12-14 00:01:46.322242] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:28:15.627  [2024-12-14 00:01:46.322792] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:28:15.627  [2024-12-14 00:01:46.322838] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:28:15.627  [2024-12-14 00:01:46.323113] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:28:15.627  
00:28:15.627  [2024-12-14 00:01:46.323153] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:28:17.005  
00:28:17.005  real	0m1.837s
00:28:17.005  user	0m1.481s
00:28:17.005  sys	0m0.256s
00:28:17.005   00:01:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:17.005   00:01:47	-- common/autotest_common.sh@10 -- # set +x
00:28:17.005  ************************************
00:28:17.005  END TEST bdev_hello_world
00:28:17.005  ************************************
00:28:17.005   00:01:47	-- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds ''
00:28:17.005   00:01:47	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:28:17.005   00:01:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:17.005   00:01:47	-- common/autotest_common.sh@10 -- # set +x
00:28:17.005  ************************************
00:28:17.005  START TEST bdev_bounds
00:28:17.005  ************************************
00:28:17.005   00:01:47	-- common/autotest_common.sh@1114 -- # bdev_bounds ''
00:28:17.005   00:01:47	-- bdev/blockdev.sh@288 -- # bdevio_pid=136693
00:28:17.005   00:01:47	-- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:28:17.005  Process bdevio pid: 136693
00:28:17.005   00:01:47	-- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 136693'
00:28:17.005   00:01:47	-- bdev/blockdev.sh@291 -- # waitforlisten 136693
00:28:17.005   00:01:47	-- common/autotest_common.sh@829 -- # '[' -z 136693 ']'
00:28:17.005   00:01:47	-- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:28:17.005   00:01:47	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:17.005   00:01:47	-- common/autotest_common.sh@834 -- # local max_retries=100
00:28:17.005  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:17.005   00:01:47	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:17.005   00:01:47	-- common/autotest_common.sh@838 -- # xtrace_disable
00:28:17.005   00:01:47	-- common/autotest_common.sh@10 -- # set +x
00:28:17.005  [2024-12-14 00:01:47.442909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:17.005  [2024-12-14 00:01:47.443140] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136693 ]
00:28:17.005  [2024-12-14 00:01:47.619711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:17.264  [2024-12-14 00:01:47.802166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:17.264  [2024-12-14 00:01:47.802320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:28:17.264  [2024-12-14 00:01:47.802322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:17.833   00:01:48	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:17.833   00:01:48	-- common/autotest_common.sh@862 -- # return 0
00:28:17.833   00:01:48	-- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:28:17.833  I/O targets:
00:28:17.833    Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:28:17.833  
00:28:17.833  
00:28:17.833       CUnit - A unit testing framework for C - Version 2.1-3
00:28:17.833       http://cunit.sourceforge.net/
00:28:17.833  
00:28:17.833  
00:28:17.833  Suite: bdevio tests on: Nvme0n1
00:28:17.833    Test: blockdev write read block ...passed
00:28:17.833    Test: blockdev write zeroes read block ...passed
00:28:17.833    Test: blockdev write zeroes read no split ...passed
00:28:17.833    Test: blockdev write zeroes read split ...passed
00:28:17.833    Test: blockdev write zeroes read split partial ...passed
00:28:17.833    Test: blockdev reset ...[2024-12-14 00:01:48.470794] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:28:17.833  [2024-12-14 00:01:48.474393] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:17.833  passed
00:28:17.833    Test: blockdev write read 8 blocks ...passed
00:28:17.833    Test: blockdev write read size > 128k ...passed
00:28:17.833    Test: blockdev write read invalid size ...passed
00:28:17.833    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:28:17.833    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:28:17.833    Test: blockdev write read max offset ...passed
00:28:17.833    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:28:17.833    Test: blockdev writev readv 8 blocks ...passed
00:28:17.833    Test: blockdev writev readv 30 x 1block ...passed
00:28:17.833    Test: blockdev writev readv block ...passed
00:28:17.833    Test: blockdev writev readv size > 128k ...passed
00:28:17.833    Test: blockdev writev readv size > 128k in two iovs ...passed
00:28:17.833    Test: blockdev comparev and writev ...[2024-12-14 00:01:48.482307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x3100d000 len:0x1000
00:28:17.833  [2024-12-14 00:01:48.482527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:28:17.833  passed
00:28:17.833    Test: blockdev nvme passthru rw ...passed
00:28:17.833    Test: blockdev nvme passthru vendor specific ...[2024-12-14 00:01:48.483571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:28:17.833  [2024-12-14 00:01:48.483782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:28:17.833  passed
00:28:17.833    Test: blockdev nvme admin passthru ...passed
00:28:17.833    Test: blockdev copy ...passed
00:28:17.833  
00:28:17.833  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:28:17.833                suites      1      1    n/a      0        0
00:28:17.833                 tests     23     23     23      0        0
00:28:17.833               asserts    152    152    152      0      n/a
00:28:17.833  
00:28:17.833  Elapsed time =    0.182 seconds
00:28:17.833  0
00:28:17.833   00:01:48	-- bdev/blockdev.sh@293 -- # killprocess 136693
00:28:17.833   00:01:48	-- common/autotest_common.sh@936 -- # '[' -z 136693 ']'
00:28:17.833   00:01:48	-- common/autotest_common.sh@940 -- # kill -0 136693
00:28:17.833    00:01:48	-- common/autotest_common.sh@941 -- # uname
00:28:17.833   00:01:48	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:17.833    00:01:48	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136693
00:28:17.833   00:01:48	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:17.833   00:01:48	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:17.833  killing process with pid 136693
00:28:17.833   00:01:48	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 136693'
00:28:17.833   00:01:48	-- common/autotest_common.sh@955 -- # kill 136693
00:28:17.833   00:01:48	-- common/autotest_common.sh@960 -- # wait 136693
00:28:19.213   00:01:49	-- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:28:19.213  
00:28:19.213  real	0m2.150s
00:28:19.213  user	0m4.912s
00:28:19.213  sys	0m0.383s
00:28:19.213   00:01:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:19.213   00:01:49	-- common/autotest_common.sh@10 -- # set +x
00:28:19.213  ************************************
00:28:19.213  END TEST bdev_bounds
00:28:19.213  ************************************
00:28:19.213   00:01:49	-- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 ''
00:28:19.213   00:01:49	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:28:19.213   00:01:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:19.213   00:01:49	-- common/autotest_common.sh@10 -- # set +x
00:28:19.213  ************************************
00:28:19.213  START TEST bdev_nbd
00:28:19.213  ************************************
00:28:19.213   00:01:49	-- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 ''
00:28:19.213    00:01:49	-- bdev/blockdev.sh@298 -- # uname -s
00:28:19.213   00:01:49	-- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:28:19.213   00:01:49	-- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:19.213   00:01:49	-- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:28:19.213   00:01:49	-- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1')
00:28:19.213   00:01:49	-- bdev/blockdev.sh@302 -- # local bdev_all
00:28:19.213   00:01:49	-- bdev/blockdev.sh@303 -- # local bdev_num=1
00:28:19.213   00:01:49	-- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:28:19.213   00:01:49	-- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:28:19.213   00:01:49	-- bdev/blockdev.sh@309 -- # local nbd_all
00:28:19.213   00:01:49	-- bdev/blockdev.sh@310 -- # bdev_num=1
00:28:19.213   00:01:49	-- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0')
00:28:19.213   00:01:49	-- bdev/blockdev.sh@312 -- # local nbd_list
00:28:19.213   00:01:49	-- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1')
00:28:19.213   00:01:49	-- bdev/blockdev.sh@313 -- # local bdev_list
00:28:19.213   00:01:49	-- bdev/blockdev.sh@316 -- # nbd_pid=136758
00:28:19.213   00:01:49	-- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:28:19.213   00:01:49	-- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:28:19.213   00:01:49	-- bdev/blockdev.sh@318 -- # waitforlisten 136758 /var/tmp/spdk-nbd.sock
00:28:19.213   00:01:49	-- common/autotest_common.sh@829 -- # '[' -z 136758 ']'
00:28:19.213   00:01:49	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:28:19.213   00:01:49	-- common/autotest_common.sh@834 -- # local max_retries=100
00:28:19.213  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:28:19.213   00:01:49	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:28:19.213   00:01:49	-- common/autotest_common.sh@838 -- # xtrace_disable
00:28:19.213   00:01:49	-- common/autotest_common.sh@10 -- # set +x
00:28:19.213  [2024-12-14 00:01:49.641994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:19.213  [2024-12-14 00:01:49.642138] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:19.213  [2024-12-14 00:01:49.797333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:19.472  [2024-12-14 00:01:49.981496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:20.040   00:01:50	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:20.040   00:01:50	-- common/autotest_common.sh@862 -- # return 0
00:28:20.040   00:01:50	-- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1
00:28:20.040   00:01:50	-- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:20.040   00:01:50	-- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1')
00:28:20.040   00:01:50	-- bdev/nbd_common.sh@114 -- # local bdev_list
00:28:20.040   00:01:50	-- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1
00:28:20.040   00:01:50	-- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:20.040   00:01:50	-- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1')
00:28:20.040   00:01:50	-- bdev/nbd_common.sh@23 -- # local bdev_list
00:28:20.040   00:01:50	-- bdev/nbd_common.sh@24 -- # local i
00:28:20.040   00:01:50	-- bdev/nbd_common.sh@25 -- # local nbd_device
00:28:20.040   00:01:50	-- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:28:20.040   00:01:50	-- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:28:20.040    00:01:50	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:28:20.299   00:01:50	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:28:20.299    00:01:50	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:28:20.299   00:01:50	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:28:20.299   00:01:50	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:28:20.299   00:01:50	-- common/autotest_common.sh@867 -- # local i
00:28:20.299   00:01:50	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:28:20.299   00:01:50	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:28:20.299   00:01:50	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:28:20.299   00:01:50	-- common/autotest_common.sh@871 -- # break
00:28:20.299   00:01:50	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:28:20.299   00:01:50	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:28:20.299   00:01:50	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:28:20.299  1+0 records in
00:28:20.299  1+0 records out
00:28:20.299  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561871 s, 7.3 MB/s
00:28:20.299    00:01:50	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:28:20.299   00:01:50	-- common/autotest_common.sh@884 -- # size=4096
00:28:20.299   00:01:50	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:28:20.299   00:01:50	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:28:20.299   00:01:50	-- common/autotest_common.sh@887 -- # return 0
00:28:20.299   00:01:50	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:28:20.299   00:01:50	-- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:28:20.299    00:01:50	-- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:28:20.299   00:01:50	-- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:28:20.299    {
00:28:20.299      "nbd_device": "/dev/nbd0",
00:28:20.299      "bdev_name": "Nvme0n1"
00:28:20.299    }
00:28:20.299  ]'
00:28:20.299   00:01:50	-- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:28:20.299    00:01:50	-- bdev/nbd_common.sh@119 -- # echo '[
00:28:20.299    {
00:28:20.299      "nbd_device": "/dev/nbd0",
00:28:20.299      "bdev_name": "Nvme0n1"
00:28:20.299    }
00:28:20.299  ]'
00:28:20.299    00:01:50	-- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:28:20.558   00:01:51	-- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:28:20.558   00:01:51	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:20.558   00:01:51	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:28:20.558   00:01:51	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:28:20.558   00:01:51	-- bdev/nbd_common.sh@51 -- # local i
00:28:20.558   00:01:51	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:28:20.558   00:01:51	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:28:20.817    00:01:51	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:28:20.817   00:01:51	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:28:20.817   00:01:51	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:28:20.817   00:01:51	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:28:20.817   00:01:51	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:28:20.817   00:01:51	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:28:20.817   00:01:51	-- bdev/nbd_common.sh@41 -- # break
00:28:20.817   00:01:51	-- bdev/nbd_common.sh@45 -- # return 0
00:28:20.817    00:01:51	-- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:28:20.817    00:01:51	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:20.817     00:01:51	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:28:21.076    00:01:51	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:28:21.076     00:01:51	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:28:21.076     00:01:51	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:28:21.076    00:01:51	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:28:21.076     00:01:51	-- bdev/nbd_common.sh@65 -- # echo ''
00:28:21.076     00:01:51	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:28:21.076     00:01:51	-- bdev/nbd_common.sh@65 -- # true
00:28:21.076    00:01:51	-- bdev/nbd_common.sh@65 -- # count=0
00:28:21.076    00:01:51	-- bdev/nbd_common.sh@66 -- # echo 0
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@122 -- # count=0
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@127 -- # return 0
00:28:21.076   00:01:51	-- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1')
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0')
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1')
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@12 -- # local i
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:28:21.076   00:01:51	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:28:21.336  /dev/nbd0
00:28:21.336    00:01:51	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:28:21.336   00:01:51	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:28:21.336   00:01:51	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:28:21.336   00:01:51	-- common/autotest_common.sh@867 -- # local i
00:28:21.336   00:01:51	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:28:21.336   00:01:51	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:28:21.336   00:01:51	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:28:21.336   00:01:51	-- common/autotest_common.sh@871 -- # break
00:28:21.336   00:01:51	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:28:21.336   00:01:51	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:28:21.336   00:01:51	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:28:21.336  1+0 records in
00:28:21.336  1+0 records out
00:28:21.336  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572775 s, 7.2 MB/s
00:28:21.336    00:01:51	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:28:21.336   00:01:51	-- common/autotest_common.sh@884 -- # size=4096
00:28:21.336   00:01:51	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:28:21.336   00:01:51	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:28:21.336   00:01:51	-- common/autotest_common.sh@887 -- # return 0
00:28:21.336   00:01:51	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:28:21.336   00:01:51	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:28:21.336    00:01:51	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:28:21.336    00:01:51	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:21.336     00:01:51	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:28:21.595    00:01:52	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:28:21.595    {
00:28:21.595      "nbd_device": "/dev/nbd0",
00:28:21.595      "bdev_name": "Nvme0n1"
00:28:21.595    }
00:28:21.595  ]'
00:28:21.595     00:01:52	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:28:21.595     00:01:52	-- bdev/nbd_common.sh@64 -- # echo '[
00:28:21.595    {
00:28:21.595      "nbd_device": "/dev/nbd0",
00:28:21.595      "bdev_name": "Nvme0n1"
00:28:21.595    }
00:28:21.595  ]'
00:28:21.595    00:01:52	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:28:21.595     00:01:52	-- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:28:21.595     00:01:52	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:28:21.595    00:01:52	-- bdev/nbd_common.sh@65 -- # count=1
00:28:21.595    00:01:52	-- bdev/nbd_common.sh@66 -- # echo 1
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@95 -- # count=1
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']'
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@71 -- # local operation=write
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:28:21.595  256+0 records in
00:28:21.595  256+0 records out
00:28:21.595  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108005 s, 97.1 MB/s
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:28:21.595  256+0 records in
00:28:21.595  256+0 records out
00:28:21.595  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0584706 s, 17.9 MB/s
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@51 -- # local i
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:28:21.595   00:01:52	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:28:21.854    00:01:52	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:28:21.854   00:01:52	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:28:21.854   00:01:52	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:28:21.854   00:01:52	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:28:21.854   00:01:52	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:28:21.854   00:01:52	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:28:21.854   00:01:52	-- bdev/nbd_common.sh@41 -- # break
00:28:21.854   00:01:52	-- bdev/nbd_common.sh@45 -- # return 0
00:28:21.854    00:01:52	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:28:21.854    00:01:52	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:21.854     00:01:52	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:28:22.113    00:01:52	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:28:22.113     00:01:52	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:28:22.113     00:01:52	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:28:22.113    00:01:52	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:28:22.113     00:01:52	-- bdev/nbd_common.sh@65 -- # echo ''
00:28:22.113     00:01:52	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:28:22.372     00:01:52	-- bdev/nbd_common.sh@65 -- # true
00:28:22.372    00:01:52	-- bdev/nbd_common.sh@65 -- # count=0
00:28:22.372    00:01:52	-- bdev/nbd_common.sh@66 -- # echo 0
00:28:22.372   00:01:52	-- bdev/nbd_common.sh@104 -- # count=0
00:28:22.372   00:01:52	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:28:22.372   00:01:52	-- bdev/nbd_common.sh@109 -- # return 0
00:28:22.372   00:01:52	-- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:28:22.372   00:01:52	-- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:22.372   00:01:52	-- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0')
00:28:22.372   00:01:52	-- bdev/nbd_common.sh@132 -- # local nbd_list
00:28:22.372   00:01:52	-- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:28:22.372   00:01:52	-- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:28:22.372  malloc_lvol_verify
00:28:22.372   00:01:53	-- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:28:22.630  8f8c31fc-7436-4251-a1ef-02e90a705edc
00:28:22.631   00:01:53	-- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:28:22.889  2467af29-a9e2-4e8a-82af-15166ca0139a
00:28:22.889   00:01:53	-- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:28:23.148  /dev/nbd0
00:28:23.148   00:01:53	-- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:28:23.148  mke2fs 1.46.5 (30-Dec-2021)
00:28:23.148  
00:28:23.148  Filesystem too small for a journal
00:28:23.148  Discarding device blocks: done
00:28:23.148  Creating filesystem with 1024 4k blocks and 1024 inodes
00:28:23.148  
00:28:23.148  Allocating group tables: done
00:28:23.148  Writing inode tables: done
00:28:23.148  Writing superblocks and filesystem accounting information: done
00:28:23.148  
00:28:23.148   00:01:53	-- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:28:23.148   00:01:53	-- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:28:23.148   00:01:53	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:23.148   00:01:53	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:28:23.148   00:01:53	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:28:23.148   00:01:53	-- bdev/nbd_common.sh@51 -- # local i
00:28:23.148   00:01:53	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:28:23.148   00:01:53	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:28:23.407    00:01:53	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:28:23.407   00:01:53	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:28:23.407   00:01:53	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:28:23.407   00:01:53	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:28:23.407   00:01:53	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:28:23.407   00:01:53	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:28:23.407   00:01:53	-- bdev/nbd_common.sh@41 -- # break
00:28:23.407   00:01:53	-- bdev/nbd_common.sh@45 -- # return 0
00:28:23.407   00:01:53	-- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:28:23.407   00:01:53	-- bdev/nbd_common.sh@147 -- # return 0
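nbd_with_lvol_verify above stacks an lvol on a malloc bdev, exports it over nbd, and proves it can hold a filesystem. Condensed, with every RPC taken verbatim from the trace (sizes are in MiB, which the mkfs output corroborates: a 4 MiB lvol yields 1024 4k blocks):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol in store "lvs"
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose it as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                    # must succeed for the test to pass
    $rpc nbd_stop_disk /dev/nbd0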
00:28:23.407   00:01:53	-- bdev/blockdev.sh@324 -- # killprocess 136758
00:28:23.407   00:01:53	-- common/autotest_common.sh@936 -- # '[' -z 136758 ']'
00:28:23.407   00:01:53	-- common/autotest_common.sh@940 -- # kill -0 136758
00:28:23.407    00:01:53	-- common/autotest_common.sh@941 -- # uname
00:28:23.407   00:01:53	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:23.407    00:01:53	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136758
00:28:23.407   00:01:53	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:23.407  killing process with pid 136758
00:28:23.407   00:01:53	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:23.407   00:01:53	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 136758'
00:28:23.407   00:01:53	-- common/autotest_common.sh@955 -- # kill 136758
00:28:23.407   00:01:53	-- common/autotest_common.sh@960 -- # wait 136758
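killprocess above is the harness's guarded kill: it refuses an empty pid, confirms the target is alive and is an SPDK reactor before signalling it, then reaps it. A sketch of the core of it (the sudo special case visible in the trace is elided):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1              # refuse an empty pid
        kill -0 "$pid" || return 1             # still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")   # trace expects reactor_0 here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                            # reap (pid must be a child of this shell)
    }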
00:28:24.344   00:01:55	-- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:28:24.344  
00:28:24.344  real	0m5.466s
00:28:24.344  user	0m7.855s
00:28:24.344  sys	0m1.134s
00:28:24.344   00:01:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:24.344   00:01:55	-- common/autotest_common.sh@10 -- # set +x
00:28:24.344  ************************************
00:28:24.344  END TEST bdev_nbd
00:28:24.345  ************************************
00:28:24.604   00:01:55	-- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:28:24.604   00:01:55	-- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']'
00:28:24.604  skipping fio tests on NVMe due to multi-ns failures.
00:28:24.604   00:01:55	-- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:28:24.604   00:01:55	-- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:28:24.604   00:01:55	-- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:28:24.604   00:01:55	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:28:24.604   00:01:55	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:24.604   00:01:55	-- common/autotest_common.sh@10 -- # set +x
00:28:24.604  ************************************
00:28:24.604  START TEST bdev_verify
00:28:24.604  ************************************
00:28:24.604   00:01:55	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:28:24.604  [2024-12-14 00:01:55.173137] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:24.604  [2024-12-14 00:01:55.173334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136950 ]
00:28:24.862  [2024-12-14 00:01:55.346619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:28:24.862  [2024-12-14 00:01:55.533288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:24.862  [2024-12-14 00:01:55.533308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:25.430  Running I/O for 5 seconds...
00:28:30.705  
00:28:30.705                                                                                                  Latency(us)
00:28:30.705  
[2024-12-14T00:02:01.437Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:30.705  
[2024-12-14T00:02:01.437Z]  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:30.705  	 Verification LBA range: start 0x0 length 0xa0000
00:28:30.705  	 Nvme0n1             :       5.01   13737.88      53.66       0.00     0.00    9280.41    1005.38   17158.52
00:28:30.705  
[2024-12-14T00:02:01.437Z]  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:30.705  	 Verification LBA range: start 0xa0000 length 0xa0000
00:28:30.705  	 Nvme0n1             :       5.01   13771.01      53.79       0.00     0.00    9258.54     283.00   15192.44
00:28:30.705  
[2024-12-14T00:02:01.437Z]  ===================================================================================================================
00:28:30.705  
[2024-12-14T00:02:01.437Z]  Total                       :              27508.89     107.46       0.00     0.00    9269.46     283.00   17158.52
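The bdevperf switches in the invocation above can be read back off this table: -q 128 is the queue depth, -o 4096 the I/O size in bytes, -w verify the workload, -t 5 the run time in seconds, and -m 0x3 the core mask that produces the two per-core jobs (-C is passed through from the harness and left unannotated here). The MiB/s column is simply IOPS times I/O size:

    # 13737.88 IOPS x 4096 B per I/O, in MiB/s -- matches the core 0x1 row
    awk 'BEGIN { printf "%.2f\n", 13737.88 * 4096 / 1048576 }'   # 53.66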
00:28:37.270  
00:28:37.270  real	0m12.308s
00:28:37.270  user	0m18.444s
00:28:37.270  sys	0m0.448s
00:28:37.270   00:02:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:37.271   00:02:07	-- common/autotest_common.sh@10 -- # set +x
00:28:37.271  ************************************
00:28:37.271  END TEST bdev_verify
00:28:37.271  ************************************
00:28:37.271   00:02:07	-- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:28:37.271   00:02:07	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:28:37.271   00:02:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:37.271   00:02:07	-- common/autotest_common.sh@10 -- # set +x
00:28:37.271  ************************************
00:28:37.271  START TEST bdev_verify_big_io
00:28:37.271  ************************************
00:28:37.271   00:02:07	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:28:37.271  [2024-12-14 00:02:07.541865] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:37.271  [2024-12-14 00:02:07.542076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137088 ]
00:28:37.271  [2024-12-14 00:02:07.712528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:28:37.271  [2024-12-14 00:02:07.897878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:37.271  [2024-12-14 00:02:07.897897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:37.839  Running I/O for 5 seconds...
00:28:43.174  
00:28:43.174                                                                                                  Latency(us)
00:28:43.174  
[2024-12-14T00:02:13.906Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:43.174  
[2024-12-14T00:02:13.906Z]  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:43.174  	 Verification LBA range: start 0x0 length 0xa000
00:28:43.174  	 Nvme0n1             :       5.02    2560.01     160.00       0.00     0.00   49417.86     562.27   74353.57
00:28:43.174  
[2024-12-14T00:02:13.906Z]  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:43.174  	 Verification LBA range: start 0xa000 length 0xa000
00:28:43.174  	 Nvme0n1             :       5.03    1990.89     124.43       0.00     0.00   63404.99     621.85   98661.47
00:28:43.174  
[2024-12-14T00:02:13.906Z]  ===================================================================================================================
00:28:43.174  
[2024-12-14T00:02:13.906Z]  Total                       :               4550.89     284.43       0.00     0.00   55540.67     562.27   98661.47
00:28:44.135  
00:28:44.135  real	0m7.290s
00:28:44.135  user	0m13.411s
00:28:44.135  sys	0m0.273s
00:28:44.135   00:02:14	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:44.135   00:02:14	-- common/autotest_common.sh@10 -- # set +x
00:28:44.135  ************************************
00:28:44.135  END TEST bdev_verify_big_io
00:28:44.135  ************************************
00:28:44.135   00:02:14	-- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:44.135   00:02:14	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:28:44.135   00:02:14	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:44.135   00:02:14	-- common/autotest_common.sh@10 -- # set +x
00:28:44.135  ************************************
00:28:44.135  START TEST bdev_write_zeroes
00:28:44.135  ************************************
00:28:44.135   00:02:14	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:44.394  [2024-12-14 00:02:14.870124] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:44.394  [2024-12-14 00:02:14.870272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137196 ]
00:28:44.394  [2024-12-14 00:02:15.023111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:44.652  [2024-12-14 00:02:15.205663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:44.909  Running I/O for 1 seconds...
00:28:46.282  
00:28:46.282                                                                                                  Latency(us)
00:28:46.282  
[2024-12-14T00:02:17.014Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:46.282  
[2024-12-14T00:02:17.014Z]  Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:46.282  	 Nvme0n1             :       1.00   70630.44     275.90       0.00     0.00    1807.47     513.86   12273.11
00:28:46.282  
[2024-12-14T00:02:17.014Z]  ===================================================================================================================
00:28:46.282  
[2024-12-14T00:02:17.014Z]  Total                       :              70630.44     275.90       0.00     0.00    1807.47     513.86   12273.11
00:28:47.217  
00:28:47.217  real	0m2.804s
00:28:47.217  user	0m2.435s
00:28:47.217  sys	0m0.269s
00:28:47.217   00:02:17	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:47.217   00:02:17	-- common/autotest_common.sh@10 -- # set +x
00:28:47.217  ************************************
00:28:47.217  END TEST bdev_write_zeroes
00:28:47.217  ************************************
00:28:47.217   00:02:17	-- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:47.217   00:02:17	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:28:47.217   00:02:17	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:47.217   00:02:17	-- common/autotest_common.sh@10 -- # set +x
00:28:47.217  ************************************
00:28:47.217  START TEST bdev_json_nonenclosed
00:28:47.217  ************************************
00:28:47.217   00:02:17	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:47.217  [2024-12-14 00:02:17.728725] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:47.217  [2024-12-14 00:02:17.728883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137253 ]
00:28:47.217  [2024-12-14 00:02:17.879876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:47.476  [2024-12-14 00:02:18.060625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:47.476  [2024-12-14 00:02:18.060843] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:28:47.476  [2024-12-14 00:02:18.060885] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:47.734  
00:28:47.734  real	0m0.719s
00:28:47.734  user	0m0.483s
00:28:47.734  sys	0m0.134s
00:28:47.734   00:02:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:47.734   00:02:18	-- common/autotest_common.sh@10 -- # set +x
00:28:47.734  ************************************
00:28:47.734  END TEST bdev_json_nonenclosed
00:28:47.734  ************************************
00:28:47.734   00:02:18	-- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:47.734   00:02:18	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:28:47.734   00:02:18	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:47.734   00:02:18	-- common/autotest_common.sh@10 -- # set +x
00:28:47.734  ************************************
00:28:47.734  START TEST bdev_json_nonarray
00:28:47.734  ************************************
00:28:47.734   00:02:18	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:47.993  [2024-12-14 00:02:18.507025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:47.993  [2024-12-14 00:02:18.507189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137292 ]
00:28:47.993  [2024-12-14 00:02:18.662192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:48.252  [2024-12-14 00:02:18.843350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:48.252  [2024-12-14 00:02:18.843580] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:28:48.252  [2024-12-14 00:02:18.843625] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:48.511  
00:28:48.511  real	0m0.723s
00:28:48.511  user	0m0.473s
00:28:48.511  sys	0m0.149s
00:28:48.511   00:02:19	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:48.511  ************************************
00:28:48.511  END TEST bdev_json_nonarray
00:28:48.511  ************************************
00:28:48.511   00:02:19	-- common/autotest_common.sh@10 -- # set +x
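The two negative tests above feed bdevperf configs that each break one structural rule of the SPDK JSON loader, and both runs end in the expected spdk_app_stop on non-zero. The log shows only the loader errors, so the fixture contents below are assumptions shaped to match them:

    # nonenclosed.json : top level not enclosed in {}, e.g.
    #     "subsystems": []
    # nonarray.json    : enclosed, but "subsystems" is not an array, e.g.
    #     { "subsystems": {} }
    # a well-formed config is an object whose "subsystems" value is an array:
    #     { "subsystems": [ { "subsystem": "bdev", "config": [ ... ] } ] }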
00:28:48.511   00:02:19	-- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]]
00:28:48.511   00:02:19	-- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]]
00:28:48.511   00:02:19	-- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]]
00:28:48.511   00:02:19	-- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:28:48.511   00:02:19	-- bdev/blockdev.sh@809 -- # cleanup
00:28:48.511   00:02:19	-- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:28:48.511   00:02:19	-- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:28:48.511   00:02:19	-- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]]
00:28:48.511   00:02:19	-- bdev/blockdev.sh@28 -- # [[ nvme == daos ]]
00:28:48.511   00:02:19	-- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]]
00:28:48.511   00:02:19	-- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]]
00:28:48.511  
00:28:48.511  real	0m37.877s
00:28:48.511  user	0m53.976s
00:28:48.511  sys	0m3.948s
00:28:48.511   00:02:19	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:48.511   00:02:19	-- common/autotest_common.sh@10 -- # set +x
00:28:48.511  ************************************
00:28:48.511  END TEST blockdev_nvme
00:28:48.511  ************************************
00:28:48.771    00:02:19	-- spdk/autotest.sh@206 -- # uname -s
00:28:48.771   00:02:19	-- spdk/autotest.sh@206 -- # [[ Linux == Linux ]]
00:28:48.771   00:02:19	-- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:28:48.771   00:02:19	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:28:48.771   00:02:19	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:48.771   00:02:19	-- common/autotest_common.sh@10 -- # set +x
00:28:48.771  ************************************
00:28:48.771  START TEST blockdev_nvme_gpt
00:28:48.771  ************************************
00:28:48.771   00:02:19	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:28:48.771  * Looking for test storage...
00:28:48.771  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:28:48.771    00:02:19	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:28:48.771     00:02:19	-- common/autotest_common.sh@1690 -- # lcov --version
00:28:48.771     00:02:19	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:28:48.771    00:02:19	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:28:48.771    00:02:19	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:28:48.771    00:02:19	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:28:48.771    00:02:19	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:28:48.771    00:02:19	-- scripts/common.sh@335 -- # IFS=.-:
00:28:48.771    00:02:19	-- scripts/common.sh@335 -- # read -ra ver1
00:28:48.771    00:02:19	-- scripts/common.sh@336 -- # IFS=.-:
00:28:48.771    00:02:19	-- scripts/common.sh@336 -- # read -ra ver2
00:28:48.771    00:02:19	-- scripts/common.sh@337 -- # local 'op=<'
00:28:48.771    00:02:19	-- scripts/common.sh@339 -- # ver1_l=2
00:28:48.771    00:02:19	-- scripts/common.sh@340 -- # ver2_l=1
00:28:48.771    00:02:19	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:28:48.771    00:02:19	-- scripts/common.sh@343 -- # case "$op" in
00:28:48.771    00:02:19	-- scripts/common.sh@344 -- # : 1
00:28:48.771    00:02:19	-- scripts/common.sh@363 -- # (( v = 0 ))
00:28:48.771    00:02:19	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:48.771     00:02:19	-- scripts/common.sh@364 -- # decimal 1
00:28:48.771     00:02:19	-- scripts/common.sh@352 -- # local d=1
00:28:48.771     00:02:19	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:48.771     00:02:19	-- scripts/common.sh@354 -- # echo 1
00:28:48.771    00:02:19	-- scripts/common.sh@364 -- # ver1[v]=1
00:28:48.771     00:02:19	-- scripts/common.sh@365 -- # decimal 2
00:28:48.771     00:02:19	-- scripts/common.sh@352 -- # local d=2
00:28:48.771     00:02:19	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:48.771     00:02:19	-- scripts/common.sh@354 -- # echo 2
00:28:48.771    00:02:19	-- scripts/common.sh@365 -- # ver2[v]=2
00:28:48.771    00:02:19	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:28:48.771    00:02:19	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:28:48.771    00:02:19	-- scripts/common.sh@367 -- # return 0
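The lt 1.15 2 check above is a field-wise version comparison: both strings are split on '.', '-' and ':' and compared numerically, padding the shorter one with zeros. A standalone sketch, assuming numeric-only fields as the decimal checks in the trace do:

    ver_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}   # pad missing fields with 0
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1   # equal is not less-than
    }
    ver_lt 1.15 2 && echo "1.15 < 2"          # true: 1 < 2 in the first field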
00:28:48.771    00:02:19	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:48.771    00:02:19	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:28:48.771  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:48.771  		--rc genhtml_branch_coverage=1
00:28:48.771  		--rc genhtml_function_coverage=1
00:28:48.771  		--rc genhtml_legend=1
00:28:48.771  		--rc geninfo_all_blocks=1
00:28:48.771  		--rc geninfo_unexecuted_blocks=1
00:28:48.771  		
00:28:48.771  		'
00:28:48.771    00:02:19	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:28:48.771  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:48.771  		--rc genhtml_branch_coverage=1
00:28:48.771  		--rc genhtml_function_coverage=1
00:28:48.771  		--rc genhtml_legend=1
00:28:48.771  		--rc geninfo_all_blocks=1
00:28:48.771  		--rc geninfo_unexecuted_blocks=1
00:28:48.771  		
00:28:48.771  		'
00:28:48.771    00:02:19	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:28:48.771  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:48.771  		--rc genhtml_branch_coverage=1
00:28:48.771  		--rc genhtml_function_coverage=1
00:28:48.771  		--rc genhtml_legend=1
00:28:48.771  		--rc geninfo_all_blocks=1
00:28:48.771  		--rc geninfo_unexecuted_blocks=1
00:28:48.771  		
00:28:48.771  		'
00:28:48.771    00:02:19	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:28:48.771  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:48.771  		--rc genhtml_branch_coverage=1
00:28:48.771  		--rc genhtml_function_coverage=1
00:28:48.771  		--rc genhtml_legend=1
00:28:48.771  		--rc geninfo_all_blocks=1
00:28:48.771  		--rc geninfo_unexecuted_blocks=1
00:28:48.771  		
00:28:48.771  		'
00:28:48.771   00:02:19	-- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:28:48.771    00:02:19	-- bdev/nbd_common.sh@6 -- # set -e
00:28:48.771   00:02:19	-- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:28:48.771   00:02:19	-- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:28:48.771   00:02:19	-- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:28:48.771   00:02:19	-- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:28:48.771   00:02:19	-- bdev/blockdev.sh@18 -- # :
00:28:48.771   00:02:19	-- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0
00:28:48.771   00:02:19	-- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1
00:28:48.771   00:02:19	-- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5
00:28:48.771    00:02:19	-- bdev/blockdev.sh@672 -- # uname -s
00:28:48.771   00:02:19	-- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']'
00:28:48.771   00:02:19	-- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0
00:28:48.771   00:02:19	-- bdev/blockdev.sh@680 -- # test_type=gpt
00:28:48.771   00:02:19	-- bdev/blockdev.sh@681 -- # crypto_device=
00:28:48.771   00:02:19	-- bdev/blockdev.sh@682 -- # dek=
00:28:48.771   00:02:19	-- bdev/blockdev.sh@683 -- # env_ctx=
00:28:48.771   00:02:19	-- bdev/blockdev.sh@684 -- # wait_for_rpc=
00:28:48.771   00:02:19	-- bdev/blockdev.sh@685 -- # '[' -n '' ']'
00:28:48.771   00:02:19	-- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]]
00:28:48.771   00:02:19	-- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]]
00:28:48.771   00:02:19	-- bdev/blockdev.sh@691 -- # start_spdk_tgt
00:28:48.771   00:02:19	-- bdev/blockdev.sh@45 -- # spdk_tgt_pid=137387
00:28:48.771   00:02:19	-- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:28:48.771   00:02:19	-- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:28:48.771   00:02:19	-- bdev/blockdev.sh@47 -- # waitforlisten 137387
00:28:48.772   00:02:19	-- common/autotest_common.sh@829 -- # '[' -z 137387 ']'
00:28:48.772   00:02:19	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:48.772   00:02:19	-- common/autotest_common.sh@834 -- # local max_retries=100
00:28:48.772  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:48.772   00:02:19	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:48.772   00:02:19	-- common/autotest_common.sh@838 -- # xtrace_disable
00:28:48.772   00:02:19	-- common/autotest_common.sh@10 -- # set +x
00:28:48.772  [2024-12-14 00:02:19.495029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:48.772  [2024-12-14 00:02:19.495197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137387 ]
00:28:49.030  [2024-12-14 00:02:19.650797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:49.289  [2024-12-14 00:02:19.828055] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:28:49.289  [2024-12-14 00:02:19.828290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:50.665   00:02:21	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:50.665   00:02:21	-- common/autotest_common.sh@862 -- # return 0
00:28:50.665   00:02:21	-- bdev/blockdev.sh@692 -- # case "$test_type" in
00:28:50.665   00:02:21	-- bdev/blockdev.sh@700 -- # setup_gpt_conf
00:28:50.665   00:02:21	-- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:28:50.665  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:28:50.935  Waiting for block devices as requested
00:28:50.935  0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:28:50.935   00:02:21	-- bdev/blockdev.sh@103 -- # get_zoned_devs
00:28:50.936   00:02:21	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:28:50.936   00:02:21	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:28:50.936   00:02:21	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:28:50.936   00:02:21	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:28:50.936   00:02:21	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:28:50.936   00:02:21	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:28:50.936   00:02:21	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:28:50.936   00:02:21	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
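get_zoned_devs above walks /sys/block/nvme* and treats a device as zoned when its queue/zoned attribute reads anything other than none. Standalone:

    for nvme in /sys/block/nvme*; do
        zoned=none
        [ -e "$nvme/queue/zoned" ] && zoned=$(<"$nvme/queue/zoned")
        # "host-aware" or "host-managed" would exclude the device here
        [ "$zoned" != none ] && echo "$(basename "$nvme") is zoned: $zoned"
    done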
00:28:50.936   00:02:21	-- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1')
00:28:50.936   00:02:21	-- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev
00:28:50.936   00:02:21	-- bdev/blockdev.sh@106 -- # gpt_nvme=
00:28:50.936   00:02:21	-- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}"
00:28:50.936   00:02:21	-- bdev/blockdev.sh@109 -- # [[ -z '' ]]
00:28:50.936   00:02:21	-- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1
00:28:50.936    00:02:21	-- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print
00:28:50.936   00:02:21	-- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label
00:28:50.936  BYT;
00:28:50.936  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;'
00:28:50.936   00:02:21	-- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label
00:28:50.936  BYT;
00:28:50.936  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]]
00:28:50.936   00:02:21	-- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1
00:28:50.936   00:02:21	-- bdev/blockdev.sh@114 -- # break
00:28:50.936   00:02:21	-- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]]
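The probe above relies on parted's machine-readable print (-ms): a blank disk reports "unrecognised disk label", which is exactly what qualifies it for GPT setup. A minimal sketch (capturing stderr alongside stdout is an assumption about where parted emits the message):

    if parted /dev/nvme0n1 -ms print 2>&1 | grep -q 'unrecognised disk label'; then
        gpt_nvme=/dev/nvme0n1   # blank disk: safe to partition for the test
    fi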
00:28:50.936   00:02:21	-- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030
00:28:50.936   00:02:21	-- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df
00:28:50.936   00:02:21	-- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
00:28:51.194    00:02:21	-- bdev/blockdev.sh@128 -- # get_spdk_gpt_old
00:28:51.194    00:02:21	-- scripts/common.sh@410 -- # local spdk_guid
00:28:51.194    00:02:21	-- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:28:51.194    00:02:21	-- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:28:51.194    00:02:21	-- scripts/common.sh@415 -- # IFS='()'
00:28:51.194    00:02:21	-- scripts/common.sh@415 -- # read -r _ spdk_guid _
00:28:51.194     00:02:21	-- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:28:51.194    00:02:21	-- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c
00:28:51.194    00:02:21	-- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:28:51.194    00:02:21	-- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:28:51.194   00:02:21	-- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:28:51.194    00:02:21	-- bdev/blockdev.sh@129 -- # get_spdk_gpt
00:28:51.194    00:02:21	-- scripts/common.sh@422 -- # local spdk_guid
00:28:51.194    00:02:21	-- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:28:51.194    00:02:21	-- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:28:51.194    00:02:21	-- scripts/common.sh@427 -- # IFS='()'
00:28:51.194    00:02:21	-- scripts/common.sh@427 -- # read -r _ spdk_guid _
00:28:51.194     00:02:21	-- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:28:51.194    00:02:21	-- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
00:28:51.194    00:02:21	-- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b
00:28:51.194    00:02:21	-- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b
00:28:51.194   00:02:21	-- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
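Both GUID lookups above use the same trick: grep the macro out of gpt.h and let read with IFS='()' capture whatever sits between the parentheses. The two substitutions that turn the raw macro arguments into a canonical GUID are inferred from the before/after values in the trace:

    gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
    spdk_guid=${spdk_guid//, /-}   # "0xA, 0xB, ..." -> "0xA-0xB-..."
    spdk_guid=${spdk_guid//0x/}    # strip the 0x prefixes
    echo "$spdk_guid"              # 6527994e-2c5a-4eec-9613-8f5944074e8b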
00:28:51.194   00:02:21	-- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
00:28:52.570  The operation has completed successfully.
00:28:52.570   00:02:22	-- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
00:28:53.506  The operation has completed successfully.
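The two sgdisk calls retype the partitions parted just created: -t <part>:<guid> sets the partition type GUID to the SPDK marker extracted from gpt.h, and -u <part>:<guid> pins the unique partition GUID so the partitions can be looked up deterministically later. For the first partition, verbatim from the trace:

    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1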
00:28:53.506   00:02:23	-- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:28:53.506  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:28:53.765  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:28:54.704   00:02:25	-- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs
00:28:54.704   00:02:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:54.704   00:02:25	-- common/autotest_common.sh@10 -- # set +x
00:28:54.704  []
00:28:54.704   00:02:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:54.704   00:02:25	-- bdev/blockdev.sh@134 -- # setup_nvme_conf
00:28:54.704   00:02:25	-- bdev/blockdev.sh@79 -- # local json
00:28:54.704   00:02:25	-- bdev/blockdev.sh@80 -- # mapfile -t json
00:28:54.704    00:02:25	-- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:28:54.704   00:02:25	-- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\'''
00:28:54.704   00:02:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:54.704   00:02:25	-- common/autotest_common.sh@10 -- # set +x
00:28:54.704   00:02:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:54.704   00:02:25	-- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine
00:28:54.704   00:02:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:54.704   00:02:25	-- common/autotest_common.sh@10 -- # set +x
00:28:54.704   00:02:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:54.704   00:02:25	-- bdev/blockdev.sh@738 -- # cat
00:28:54.704    00:02:25	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel
00:28:54.704    00:02:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:54.704    00:02:25	-- common/autotest_common.sh@10 -- # set +x
00:28:54.704    00:02:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:54.704    00:02:25	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev
00:28:54.704    00:02:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:54.704    00:02:25	-- common/autotest_common.sh@10 -- # set +x
00:28:54.704    00:02:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:54.704    00:02:25	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf
00:28:54.704    00:02:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:54.704    00:02:25	-- common/autotest_common.sh@10 -- # set +x
00:28:54.704    00:02:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:54.704   00:02:25	-- bdev/blockdev.sh@746 -- # mapfile -t bdevs
00:28:54.704    00:02:25	-- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs
00:28:54.704    00:02:25	-- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)'
00:28:54.704    00:02:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:54.704    00:02:25	-- common/autotest_common.sh@10 -- # set +x
00:28:54.704    00:02:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:54.705   00:02:25	-- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name
00:28:54.705    00:02:25	-- bdev/blockdev.sh@747 -- # jq -r .name
00:28:54.705    00:02:25	-- bdev/blockdev.sh@747 -- # printf '%s\n' '{' '  "name": "Nvme0n1p1",' '  "aliases": [' '    "6f89f330-603b-4116-ac73-2ca8eae53030"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655104,' '  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme0n1",' '      "offset_blocks": 256,' '      "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' '      "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '      "partition_name": "SPDK_TEST_first"' '    }' '  }' '}' '{' '  "name": "Nvme0n1p2",' '  "aliases": [' '    "abf1734f-66e5-4c0f-aa29-4021d4d307df"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655103,' '  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme0n1",' '      "offset_blocks": 655360,' '      "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' '      "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '      "partition_name": "SPDK_TEST_second"' '    }' '  }' '}'
00:28:54.965   00:02:25	-- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}")
00:28:54.965   00:02:25	-- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1
00:28:54.965   00:02:25	-- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT
00:28:54.965   00:02:25	-- bdev/blockdev.sh@752 -- # killprocess 137387
00:28:54.965   00:02:25	-- common/autotest_common.sh@936 -- # '[' -z 137387 ']'
00:28:54.965   00:02:25	-- common/autotest_common.sh@940 -- # kill -0 137387
00:28:54.965    00:02:25	-- common/autotest_common.sh@941 -- # uname
00:28:54.965   00:02:25	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:54.965    00:02:25	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137387
00:28:54.965   00:02:25	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:54.965   00:02:25	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:54.965  killing process with pid 137387
00:28:54.965   00:02:25	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 137387'
00:28:54.965   00:02:25	-- common/autotest_common.sh@955 -- # kill 137387
00:28:54.965   00:02:25	-- common/autotest_common.sh@960 -- # wait 137387
00:28:56.869   00:02:27	-- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT
00:28:56.869   00:02:27	-- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:28:56.869   00:02:27	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:28:56.869   00:02:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:56.869   00:02:27	-- common/autotest_common.sh@10 -- # set +x
00:28:56.869  ************************************
00:28:56.869  START TEST bdev_hello_world
00:28:56.869  ************************************
00:28:56.869   00:02:27	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:28:56.869  [2024-12-14 00:02:27.441764] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:56.869  [2024-12-14 00:02:27.441926] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137822 ]
00:28:56.869  [2024-12-14 00:02:27.594432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:57.127  [2024-12-14 00:02:27.773304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:57.695  [2024-12-14 00:02:28.181609] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:28:57.695  [2024-12-14 00:02:28.181685] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1
00:28:57.695  [2024-12-14 00:02:28.181733] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:28:57.695  [2024-12-14 00:02:28.185917] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:28:57.695  [2024-12-14 00:02:28.186364] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:28:57.695  [2024-12-14 00:02:28.186405] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:28:57.695  [2024-12-14 00:02:28.186610] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:28:57.695  
00:28:57.695  [2024-12-14 00:02:28.186645] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:28:58.631  
00:28:58.631  real	0m1.801s
00:28:58.631  user	0m1.445s
00:28:58.631  sys	0m0.257s
00:28:58.631   00:02:29	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:58.631   00:02:29	-- common/autotest_common.sh@10 -- # set +x
00:28:58.631  ************************************
00:28:58.631  END TEST bdev_hello_world
00:28:58.631  ************************************
00:28:58.631   00:02:29	-- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds ''
00:28:58.631   00:02:29	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:28:58.631   00:02:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:58.631   00:02:29	-- common/autotest_common.sh@10 -- # set +x
00:28:58.631  ************************************
00:28:58.631  START TEST bdev_bounds
00:28:58.631  ************************************
00:28:58.631   00:02:29	-- common/autotest_common.sh@1114 -- # bdev_bounds ''
00:28:58.631   00:02:29	-- bdev/blockdev.sh@288 -- # bdevio_pid=137863
00:28:58.631   00:02:29	-- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:28:58.631  Process bdevio pid: 137863
00:28:58.631   00:02:29	-- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:28:58.631   00:02:29	-- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 137863'
00:28:58.631   00:02:29	-- bdev/blockdev.sh@291 -- # waitforlisten 137863
00:28:58.631   00:02:29	-- common/autotest_common.sh@829 -- # '[' -z 137863 ']'
00:28:58.631   00:02:29	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:58.631   00:02:29	-- common/autotest_common.sh@834 -- # local max_retries=100
00:28:58.631  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:58.631   00:02:29	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:58.631   00:02:29	-- common/autotest_common.sh@838 -- # xtrace_disable
00:28:58.631   00:02:29	-- common/autotest_common.sh@10 -- # set +x
00:28:58.631  [2024-12-14 00:02:29.310138] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:58.631  [2024-12-14 00:02:29.310315] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137863 ]
00:28:58.890  [2024-12-14 00:02:29.489903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:59.148  [2024-12-14 00:02:29.666741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:59.148  [2024-12-14 00:02:29.666887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:28:59.148  [2024-12-14 00:02:29.666889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:59.716   00:02:30	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:59.716   00:02:30	-- common/autotest_common.sh@862 -- # return 0
00:28:59.716   00:02:30	-- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:28:59.716  I/O targets:
00:28:59.716    Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:28:59.716    Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:28:59.716  
00:28:59.716  
00:28:59.716       CUnit - A unit testing framework for C - Version 2.1-3
00:28:59.716       http://cunit.sourceforge.net/
00:28:59.716  
00:28:59.716  
00:28:59.716  Suite: bdevio tests on: Nvme0n1p2
00:28:59.716    Test: blockdev write read block ...passed
00:28:59.716    Test: blockdev write zeroes read block ...passed
00:28:59.716    Test: blockdev write zeroes read no split ...passed
00:28:59.716    Test: blockdev write zeroes read split ...passed
00:28:59.716    Test: blockdev write zeroes read split partial ...passed
00:28:59.716    Test: blockdev reset ...[2024-12-14 00:02:30.320626] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:28:59.716  [2024-12-14 00:02:30.323986] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:59.716  passed
00:28:59.716    Test: blockdev write read 8 blocks ...passed
00:28:59.716    Test: blockdev write read size > 128k ...passed
00:28:59.716    Test: blockdev write read invalid size ...passed
00:28:59.716    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:28:59.716    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:28:59.716    Test: blockdev write read max offset ...passed
00:28:59.716    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:28:59.716    Test: blockdev writev readv 8 blocks ...passed
00:28:59.716    Test: blockdev writev readv 30 x 1block ...passed
00:28:59.716    Test: blockdev writev readv block ...passed
00:28:59.716    Test: blockdev writev readv size > 128k ...passed
00:28:59.716    Test: blockdev writev readv size > 128k in two iovs ...passed
00:28:59.716    Test: blockdev comparev and writev ...[2024-12-14 00:02:30.332593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x27a0b000 len:0x1000
00:28:59.716  [2024-12-14 00:02:30.332681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:28:59.716  passed
00:28:59.716    Test: blockdev nvme passthru rw ...passed
00:28:59.716    Test: blockdev nvme passthru vendor specific ...passed
00:28:59.716    Test: blockdev nvme admin passthru ...passed
00:28:59.716    Test: blockdev copy ...passed
00:28:59.716  Suite: bdevio tests on: Nvme0n1p1
00:28:59.716    Test: blockdev write read block ...passed
00:28:59.716    Test: blockdev write zeroes read block ...passed
00:28:59.716    Test: blockdev write zeroes read no split ...passed
00:28:59.716    Test: blockdev write zeroes read split ...passed
00:28:59.716    Test: blockdev write zeroes read split partial ...passed
00:28:59.716    Test: blockdev reset ...[2024-12-14 00:02:30.379434] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:28:59.716  [2024-12-14 00:02:30.382456] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:59.716  passed
00:28:59.716    Test: blockdev write read 8 blocks ...passed
00:28:59.716    Test: blockdev write read size > 128k ...passed
00:28:59.716    Test: blockdev write read invalid size ...passed
00:28:59.716    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:28:59.716    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:28:59.716    Test: blockdev write read max offset ...passed
00:28:59.716    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:28:59.716    Test: blockdev writev readv 8 blocks ...passed
00:28:59.716    Test: blockdev writev readv 30 x 1block ...passed
00:28:59.716    Test: blockdev writev readv block ...passed
00:28:59.716    Test: blockdev writev readv size > 128k ...passed
00:28:59.716    Test: blockdev writev readv size > 128k in two iovs ...passed
00:28:59.716    Test: blockdev comparev and writev ...[2024-12-14 00:02:30.390810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x27a0d000 len:0x1000
00:28:59.716  [2024-12-14 00:02:30.390885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:28:59.716  passed
00:28:59.716    Test: blockdev nvme passthru rw ...passed
00:28:59.716    Test: blockdev nvme passthru vendor specific ...passed
00:28:59.716    Test: blockdev nvme admin passthru ...passed
00:28:59.716    Test: blockdev copy ...passed
00:28:59.716  
00:28:59.716  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:28:59.716                suites      2      2    n/a      0        0
00:28:59.716                 tests     46     46     46      0        0
00:28:59.716               asserts    284    284    284      0      n/a
00:28:59.716  
00:28:59.716  Elapsed time =    0.330 seconds
00:28:59.716  0
00:28:59.716   00:02:30	-- bdev/blockdev.sh@293 -- # killprocess 137863
00:28:59.716   00:02:30	-- common/autotest_common.sh@936 -- # '[' -z 137863 ']'
00:28:59.716   00:02:30	-- common/autotest_common.sh@940 -- # kill -0 137863
00:28:59.716    00:02:30	-- common/autotest_common.sh@941 -- # uname
00:28:59.716   00:02:30	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:59.716    00:02:30	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137863
00:28:59.716   00:02:30	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:59.716   00:02:30	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:59.716   00:02:30	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 137863'
00:28:59.716  killing process with pid 137863
00:28:59.716   00:02:30	-- common/autotest_common.sh@955 -- # kill 137863
00:28:59.716   00:02:30	-- common/autotest_common.sh@960 -- # wait 137863
00:29:01.094   00:02:31	-- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:29:01.094  
00:29:01.094  real	0m2.183s
00:29:01.094  user	0m4.999s
00:29:01.094  sys	0m0.372s
00:29:01.094   00:02:31	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:01.094   00:02:31	-- common/autotest_common.sh@10 -- # set +x
00:29:01.094  ************************************
00:29:01.094  END TEST bdev_bounds
00:29:01.094  ************************************
00:29:01.094   00:02:31	-- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' ''
00:29:01.094   00:02:31	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:29:01.094   00:02:31	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:01.094   00:02:31	-- common/autotest_common.sh@10 -- # set +x
00:29:01.094  ************************************
00:29:01.094  START TEST bdev_nbd
00:29:01.094  ************************************
00:29:01.094   00:02:31	-- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' ''
00:29:01.094    00:02:31	-- bdev/blockdev.sh@298 -- # uname -s
00:29:01.094   00:02:31	-- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:29:01.094   00:02:31	-- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:01.094   00:02:31	-- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:29:01.094   00:02:31	-- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2')
00:29:01.094   00:02:31	-- bdev/blockdev.sh@302 -- # local bdev_all
00:29:01.094   00:02:31	-- bdev/blockdev.sh@303 -- # local bdev_num=2
00:29:01.094   00:02:31	-- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:29:01.094   00:02:31	-- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:29:01.094   00:02:31	-- bdev/blockdev.sh@309 -- # local nbd_all
00:29:01.094   00:02:31	-- bdev/blockdev.sh@310 -- # bdev_num=2
00:29:01.094   00:02:31	-- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:01.094   00:02:31	-- bdev/blockdev.sh@312 -- # local nbd_list
00:29:01.094   00:02:31	-- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:29:01.094   00:02:31	-- bdev/blockdev.sh@313 -- # local bdev_list
00:29:01.094   00:02:31	-- bdev/blockdev.sh@316 -- # nbd_pid=137932
00:29:01.094   00:02:31	-- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:29:01.094   00:02:31	-- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:29:01.094   00:02:31	-- bdev/blockdev.sh@318 -- # waitforlisten 137932 /var/tmp/spdk-nbd.sock
00:29:01.094   00:02:31	-- common/autotest_common.sh@829 -- # '[' -z 137932 ']'
00:29:01.094   00:02:31	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:29:01.094   00:02:31	-- common/autotest_common.sh@834 -- # local max_retries=100
00:29:01.094   00:02:31	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:29:01.094  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:29:01.094   00:02:31	-- common/autotest_common.sh@838 -- # xtrace_disable
00:29:01.094   00:02:31	-- common/autotest_common.sh@10 -- # set +x
00:29:01.094  [2024-12-14 00:02:31.541294] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:29:01.094  [2024-12-14 00:02:31.541426] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:01.094  [2024-12-14 00:02:31.687511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:01.353  [2024-12-14 00:02:31.864550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:01.920   00:02:32	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:01.920   00:02:32	-- common/autotest_common.sh@862 -- # return 0
00:29:01.920   00:02:32	-- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2'
00:29:01.920   00:02:32	-- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:01.920   00:02:32	-- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:29:01.920   00:02:32	-- bdev/nbd_common.sh@114 -- # local bdev_list
00:29:01.920   00:02:32	-- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2'
00:29:01.920   00:02:32	-- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:01.920   00:02:32	-- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:29:01.920   00:02:32	-- bdev/nbd_common.sh@23 -- # local bdev_list
00:29:01.920   00:02:32	-- bdev/nbd_common.sh@24 -- # local i
00:29:01.920   00:02:32	-- bdev/nbd_common.sh@25 -- # local nbd_device
00:29:01.920   00:02:32	-- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:29:01.920   00:02:32	-- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:29:01.920    00:02:32	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1
00:29:01.920   00:02:32	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:29:01.920    00:02:32	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:29:01.920   00:02:32	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:29:01.920   00:02:32	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:29:01.920   00:02:32	-- common/autotest_common.sh@867 -- # local i
00:29:01.920   00:02:32	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:29:01.920   00:02:32	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:29:01.920   00:02:32	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:29:01.920   00:02:32	-- common/autotest_common.sh@871 -- # break
00:29:01.920   00:02:32	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:29:01.920   00:02:32	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:29:01.920   00:02:32	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:01.920  1+0 records in
00:29:01.920  1+0 records out
00:29:01.920  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516201 s, 7.9 MB/s
00:29:01.920    00:02:32	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:01.920   00:02:32	-- common/autotest_common.sh@884 -- # size=4096
00:29:01.920   00:02:32	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:01.920   00:02:32	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:29:01.920   00:02:32	-- common/autotest_common.sh@887 -- # return 0
00:29:01.920   00:02:32	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:29:02.179   00:02:32	-- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:29:02.179    00:02:32	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2
00:29:02.439   00:02:32	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:29:02.439    00:02:32	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:29:02.439   00:02:32	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:29:02.439   00:02:32	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:29:02.439   00:02:32	-- common/autotest_common.sh@867 -- # local i
00:29:02.439   00:02:32	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:29:02.439   00:02:32	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:29:02.439   00:02:32	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:29:02.439   00:02:32	-- common/autotest_common.sh@871 -- # break
00:29:02.439   00:02:32	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:29:02.439   00:02:32	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:29:02.439   00:02:32	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:02.439  1+0 records in
00:29:02.439  1+0 records out
00:29:02.439  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000838954 s, 4.9 MB/s
00:29:02.439    00:02:32	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:02.439   00:02:32	-- common/autotest_common.sh@884 -- # size=4096
00:29:02.439   00:02:32	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:02.439   00:02:32	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:29:02.439   00:02:32	-- common/autotest_common.sh@887 -- # return 0
00:29:02.439   00:02:32	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:29:02.439   00:02:32	-- bdev/nbd_common.sh@27 -- # (( i < 2 ))
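
Both nbd_start_disk calls above are followed by the same waitfornbd readiness check: the device must appear in /proc/partitions, and a direct 4 KiB read from it must produce a non-empty file. Condensed into one function (paths and retry bound as traced; the sleep between retries is illustrative):

    waitfornbd() {
        local nbd_name=$1 tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # kernel node exists
            sleep 0.1
        done
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$tmp"); rm -f "$tmp"
        [ "$size" != 0 ]                                       # a real read came back
    }
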
00:29:02.439    00:02:32	-- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:02.439   00:02:33	-- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:29:02.439    {
00:29:02.439      "nbd_device": "/dev/nbd0",
00:29:02.439      "bdev_name": "Nvme0n1p1"
00:29:02.439    },
00:29:02.439    {
00:29:02.439      "nbd_device": "/dev/nbd1",
00:29:02.439      "bdev_name": "Nvme0n1p2"
00:29:02.439    }
00:29:02.439  ]'
00:29:02.439   00:02:33	-- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:29:02.439    00:02:33	-- bdev/nbd_common.sh@119 -- # echo '[
00:29:02.439    {
00:29:02.439      "nbd_device": "/dev/nbd0",
00:29:02.439      "bdev_name": "Nvme0n1p1"
00:29:02.439    },
00:29:02.439    {
00:29:02.439      "nbd_device": "/dev/nbd1",
00:29:02.439      "bdev_name": "Nvme0n1p2"
00:29:02.439    }
00:29:02.439  ]'
00:29:02.439    00:02:33	-- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:29:02.698   00:02:33	-- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:29:02.698   00:02:33	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:02.698   00:02:33	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:02.698   00:02:33	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:02.698   00:02:33	-- bdev/nbd_common.sh@51 -- # local i
00:29:02.698   00:02:33	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:02.698   00:02:33	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:29:02.698    00:02:33	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:02.698   00:02:33	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:02.698   00:02:33	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:02.698   00:02:33	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:02.698   00:02:33	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:02.698   00:02:33	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:02.957   00:02:33	-- bdev/nbd_common.sh@41 -- # break
00:29:02.957   00:02:33	-- bdev/nbd_common.sh@45 -- # return 0
00:29:02.957   00:02:33	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:02.957   00:02:33	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:29:02.957    00:02:33	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:29:02.957   00:02:33	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:29:02.957   00:02:33	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:29:02.957   00:02:33	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:02.957   00:02:33	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:02.957   00:02:33	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:29:02.957   00:02:33	-- bdev/nbd_common.sh@41 -- # break
00:29:02.957   00:02:33	-- bdev/nbd_common.sh@45 -- # return 0
00:29:02.957    00:02:33	-- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:29:02.957    00:02:33	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:02.957     00:02:33	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:03.216    00:02:33	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:29:03.216     00:02:33	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:29:03.216     00:02:33	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:03.475    00:02:33	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:29:03.475     00:02:33	-- bdev/nbd_common.sh@65 -- # echo ''
00:29:03.475     00:02:33	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:03.475     00:02:33	-- bdev/nbd_common.sh@65 -- # true
00:29:03.475    00:02:33	-- bdev/nbd_common.sh@65 -- # count=0
00:29:03.475    00:02:33	-- bdev/nbd_common.sh@66 -- # echo 0
00:29:03.475   00:02:33	-- bdev/nbd_common.sh@122 -- # count=0
00:29:03.475   00:02:33	-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:29:03.475   00:02:33	-- bdev/nbd_common.sh@127 -- # return 0
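
The start/stop pass above ends by asserting that nbd_get_disks now returns an empty list. The count is taken by piping the JSON through jq and counting /dev/nbd matches with grep -c, which is why a bare true appears in the trace: grep -c exits non-zero when the count is 0. The counting step in isolation:

    disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    # '|| true' mirrors the traced 'true': grep -c prints 0 but exits 1 on no matches
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] || echo "unexpected NBD devices still attached: $count" >&2
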
00:29:03.475   00:02:33	-- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1'
00:29:03.476   00:02:33	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:03.476   00:02:33	-- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:29:03.476   00:02:33	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:29:03.476   00:02:33	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:03.476   00:02:33	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:29:03.476   00:02:33	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1'
00:29:03.476   00:02:33	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:03.476   00:02:33	-- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:29:03.476   00:02:33	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:29:03.476   00:02:33	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:03.476   00:02:33	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:29:03.476   00:02:33	-- bdev/nbd_common.sh@12 -- # local i
00:29:03.476   00:02:33	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:29:03.476   00:02:33	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:03.476   00:02:33	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
00:29:03.735  /dev/nbd0
00:29:03.735    00:02:34	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:29:03.735   00:02:34	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:29:03.735   00:02:34	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:29:03.735   00:02:34	-- common/autotest_common.sh@867 -- # local i
00:29:03.735   00:02:34	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:29:03.735   00:02:34	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:29:03.735   00:02:34	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:29:03.735   00:02:34	-- common/autotest_common.sh@871 -- # break
00:29:03.735   00:02:34	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:29:03.735   00:02:34	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:29:03.735   00:02:34	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:03.735  1+0 records in
00:29:03.735  1+0 records out
00:29:03.735  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576517 s, 7.1 MB/s
00:29:03.735    00:02:34	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:03.735   00:02:34	-- common/autotest_common.sh@884 -- # size=4096
00:29:03.735   00:02:34	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:03.735   00:02:34	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:29:03.735   00:02:34	-- common/autotest_common.sh@887 -- # return 0
00:29:03.735   00:02:34	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:03.735   00:02:34	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:03.735   00:02:34	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1
00:29:03.994  /dev/nbd1
00:29:03.994    00:02:34	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:29:03.994   00:02:34	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:29:03.994   00:02:34	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:29:03.994   00:02:34	-- common/autotest_common.sh@867 -- # local i
00:29:03.994   00:02:34	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:29:03.994   00:02:34	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:29:03.994   00:02:34	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:29:03.994   00:02:34	-- common/autotest_common.sh@871 -- # break
00:29:03.994   00:02:34	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:29:03.994   00:02:34	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:29:03.994   00:02:34	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:03.994  1+0 records in
00:29:03.994  1+0 records out
00:29:03.994  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000625083 s, 6.6 MB/s
00:29:03.994    00:02:34	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:03.994   00:02:34	-- common/autotest_common.sh@884 -- # size=4096
00:29:03.994   00:02:34	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:03.994   00:02:34	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:29:03.994   00:02:34	-- common/autotest_common.sh@887 -- # return 0
00:29:03.994   00:02:34	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:03.994   00:02:34	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:03.994    00:02:34	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:29:03.994    00:02:34	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:03.994     00:02:34	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:04.274    00:02:34	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:29:04.274    {
00:29:04.274      "nbd_device": "/dev/nbd0",
00:29:04.274      "bdev_name": "Nvme0n1p1"
00:29:04.274    },
00:29:04.274    {
00:29:04.274      "nbd_device": "/dev/nbd1",
00:29:04.274      "bdev_name": "Nvme0n1p2"
00:29:04.274    }
00:29:04.274  ]'
00:29:04.274     00:02:34	-- bdev/nbd_common.sh@64 -- # echo '[
00:29:04.274    {
00:29:04.274      "nbd_device": "/dev/nbd0",
00:29:04.274      "bdev_name": "Nvme0n1p1"
00:29:04.274    },
00:29:04.274    {
00:29:04.274      "nbd_device": "/dev/nbd1",
00:29:04.274      "bdev_name": "Nvme0n1p2"
00:29:04.274    }
00:29:04.274  ]'
00:29:04.274     00:02:34	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:04.274    00:02:34	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:29:04.274  /dev/nbd1'
00:29:04.274     00:02:34	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:29:04.274  /dev/nbd1'
00:29:04.274     00:02:34	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:04.274    00:02:34	-- bdev/nbd_common.sh@65 -- # count=2
00:29:04.274    00:02:34	-- bdev/nbd_common.sh@66 -- # echo 2
00:29:04.274   00:02:34	-- bdev/nbd_common.sh@95 -- # count=2
00:29:04.274   00:02:34	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:29:04.274   00:02:34	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:29:04.274   00:02:34	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:04.274   00:02:34	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:29:04.274   00:02:34	-- bdev/nbd_common.sh@71 -- # local operation=write
00:29:04.274   00:02:34	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:29:04.274   00:02:34	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:29:04.274   00:02:34	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:29:04.274  256+0 records in
00:29:04.274  256+0 records out
00:29:04.274  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00743225 s, 141 MB/s
00:29:04.274   00:02:34	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:04.274   00:02:34	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:29:04.274  256+0 records in
00:29:04.274  256+0 records out
00:29:04.274  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0794927 s, 13.2 MB/s
00:29:04.274   00:02:34	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:04.274   00:02:34	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:29:04.571  256+0 records in
00:29:04.571  256+0 records out
00:29:04.571  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0731776 s, 14.3 MB/s
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@51 -- # local i
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:04.571   00:02:35	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:29:04.846    00:02:35	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:04.846   00:02:35	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:04.846   00:02:35	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:04.846   00:02:35	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:04.846   00:02:35	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:04.846   00:02:35	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:04.846   00:02:35	-- bdev/nbd_common.sh@41 -- # break
00:29:04.846   00:02:35	-- bdev/nbd_common.sh@45 -- # return 0
00:29:04.846   00:02:35	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:04.846   00:02:35	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:29:04.846    00:02:35	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:29:04.846   00:02:35	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:29:04.846   00:02:35	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:29:04.846   00:02:35	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:04.846   00:02:35	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:04.846   00:02:35	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:29:04.846   00:02:35	-- bdev/nbd_common.sh@41 -- # break
00:29:04.846   00:02:35	-- bdev/nbd_common.sh@45 -- # return 0
00:29:04.846    00:02:35	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:29:04.846    00:02:35	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:04.846     00:02:35	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:05.105    00:02:35	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:29:05.105     00:02:35	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:29:05.105     00:02:35	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:05.365    00:02:35	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:29:05.365     00:02:35	-- bdev/nbd_common.sh@65 -- # echo ''
00:29:05.365     00:02:35	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:05.365     00:02:35	-- bdev/nbd_common.sh@65 -- # true
00:29:05.365    00:02:35	-- bdev/nbd_common.sh@65 -- # count=0
00:29:05.365    00:02:35	-- bdev/nbd_common.sh@66 -- # echo 0
00:29:05.365   00:02:35	-- bdev/nbd_common.sh@104 -- # count=0
00:29:05.365   00:02:35	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:29:05.365   00:02:35	-- bdev/nbd_common.sh@109 -- # return 0
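
The data pass above writes the same 1 MiB of /dev/urandom to both NBD devices and then compares each one back against the source file, so corruption on either GPT partition fails the test. The same pattern, condensed (the traced version does all writes before all verifies):

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256                # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct     # write it out
        cmp -b -n 1M "$tmp" "$dev"                                # byte-for-byte verify
    done
    rm "$tmp"
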
00:29:05.365   00:02:35	-- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:29:05.365   00:02:35	-- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:05.365   00:02:35	-- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:05.365   00:02:35	-- bdev/nbd_common.sh@132 -- # local nbd_list
00:29:05.365   00:02:35	-- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:29:05.365   00:02:35	-- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:29:05.623  malloc_lvol_verify
00:29:05.623   00:02:36	-- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:29:05.881  758e64cd-2bb6-45ea-b12c-d6a846bc0a54
00:29:05.881   00:02:36	-- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:29:05.881  8c760d5b-94cf-4c01-8c31-f2ff28b8aac0
00:29:05.881   00:02:36	-- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:29:06.140  /dev/nbd0
00:29:06.140   00:02:36	-- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:29:06.140  mke2fs 1.46.5 (30-Dec-2021)
00:29:06.140  Discarding device blocks:    0/1024         done                            
00:29:06.140  Creating filesystem with 1024 4k blocks and 1024 inodes
00:29:06.140  Filesystem too small for a journal
00:29:06.140  
00:29:06.140  Allocating group tables: 0/1   done                            
00:29:06.140  Writing inode tables: 0/1   done                            
00:29:06.140  Writing superblocks and filesystem accounting information: 0/1   done
00:29:06.140  
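
The numbers are consistent: bdev_lvol_create lvol 4 made a 4 MiB volume, which is exactly the 1024 4k blocks mke2fs reports, and a filesystem that small is below ext4's journal threshold, so the "Filesystem too small for a journal" notice is expected rather than a failure.
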
00:29:06.140   00:02:36	-- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:29:06.140   00:02:36	-- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:29:06.140   00:02:36	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:06.140   00:02:36	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:29:06.140   00:02:36	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:06.140   00:02:36	-- bdev/nbd_common.sh@51 -- # local i
00:29:06.140   00:02:36	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:06.140   00:02:36	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:29:06.399    00:02:37	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:06.399   00:02:37	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:06.399   00:02:37	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:06.399   00:02:37	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:06.399   00:02:37	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:06.399   00:02:37	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:06.399   00:02:37	-- bdev/nbd_common.sh@41 -- # break
00:29:06.399   00:02:37	-- bdev/nbd_common.sh@45 -- # return 0
00:29:06.399   00:02:37	-- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:29:06.399   00:02:37	-- bdev/nbd_common.sh@147 -- # return 0
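
Stripped of the xtrace, the lvol round trip above is a six-step sequence against the same RPC socket, ending in a clean mkfs:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on the malloc bdev
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose it as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                    # must format without error
    $rpc nbd_stop_disk /dev/nbd0
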
00:29:06.399   00:02:37	-- bdev/blockdev.sh@324 -- # killprocess 137932
00:29:06.399   00:02:37	-- common/autotest_common.sh@936 -- # '[' -z 137932 ']'
00:29:06.399   00:02:37	-- common/autotest_common.sh@940 -- # kill -0 137932
00:29:06.399    00:02:37	-- common/autotest_common.sh@941 -- # uname
00:29:06.399   00:02:37	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:06.399    00:02:37	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137932
00:29:06.399   00:02:37	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:29:06.399   00:02:37	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:29:06.399   00:02:37	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 137932'
00:29:06.399  killing process with pid 137932
00:29:06.399   00:02:37	-- common/autotest_common.sh@955 -- # kill 137932
00:29:06.399   00:02:37	-- common/autotest_common.sh@960 -- # wait 137932
00:29:07.775   00:02:38	-- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:29:07.775  
00:29:07.775  real	0m6.637s
00:29:07.775  user	0m9.480s
00:29:07.775  sys	0m1.645s
00:29:07.775  ************************************
00:29:07.775  END TEST bdev_nbd
00:29:07.775  ************************************
00:29:07.775   00:02:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:07.775   00:02:38	-- common/autotest_common.sh@10 -- # set +x
00:29:07.775   00:02:38	-- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:29:07.775   00:02:38	-- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']'
00:29:07.775   00:02:38	-- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']'
00:29:07.775   00:02:38	-- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:29:07.775  skipping fio tests on NVMe due to multi-ns failures.
00:29:07.775   00:02:38	-- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:29:07.775   00:02:38	-- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:29:07.775   00:02:38	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:29:07.775   00:02:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:07.775   00:02:38	-- common/autotest_common.sh@10 -- # set +x
00:29:07.775  ************************************
00:29:07.775  START TEST bdev_verify
00:29:07.775  ************************************
00:29:07.775   00:02:38	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:29:07.775  [2024-12-14 00:02:38.225722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:29:07.775  [2024-12-14 00:02:38.225918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138182 ]
00:29:07.775  [2024-12-14 00:02:38.398167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:08.034  [2024-12-14 00:02:38.575141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:08.034  [2024-12-14 00:02:38.575160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:08.293  Running I/O for 5 seconds...
00:29:13.564  
00:29:13.564                                                                                                  Latency(us)
00:29:13.564  
[2024-12-14T00:02:44.296Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:13.564  
[2024-12-14T00:02:44.296Z]  Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:13.564  	 Verification LBA range: start 0x0 length 0x4ff80
00:29:13.564  	 Nvme0n1p1           :       5.02    5417.58      21.16       0.00     0.00   23568.56    1638.40   27286.81
00:29:13.564  
[2024-12-14T00:02:44.296Z]  Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:13.564  	 Verification LBA range: start 0x4ff80 length 0x4ff80
00:29:13.564  	 Nvme0n1p1           :       5.02    5421.03      21.18       0.00     0.00   23552.77    2308.65   25380.31
00:29:13.564  
[2024-12-14T00:02:44.296Z]  Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:13.564  	 Verification LBA range: start 0x0 length 0x4ff7f
00:29:13.564  	 Nvme0n1p2           :       5.02    5424.13      21.19       0.00     0.00   23502.45     389.12   22043.93
00:29:13.564  
[2024-12-14T00:02:44.296Z]  Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:13.564  	 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:29:13.564  	 Nvme0n1p2           :       5.02    5418.66      21.17       0.00     0.00   23539.14    2546.97   24069.59
00:29:13.564  
[2024-12-14T00:02:44.296Z]  ===================================================================================================================
00:29:13.564  
[2024-12-14T00:02:44.296Z]  Total                       :              21681.41      84.69       0.00     0.00   23540.72     389.12   27286.81
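
As a sanity check on the table, MiB/s for 4 KiB I/O is IOPS/256: 21681.41/256 ≈ 84.69 MiB/s, matching the Total row, with each of the four job lines holding a near-identical ~5.4k IOPS share across the 5-second window.
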
00:29:16.855  
00:29:16.855  real	0m8.944s
00:29:16.855  user	0m15.300s
00:29:16.855  sys	0m0.331s
00:29:16.855   00:02:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:16.855   00:02:47	-- common/autotest_common.sh@10 -- # set +x
00:29:16.855  ************************************
00:29:16.855  END TEST bdev_verify
00:29:16.855  ************************************
00:29:16.855   00:02:47	-- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:29:16.855   00:02:47	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:29:16.855   00:02:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:16.855   00:02:47	-- common/autotest_common.sh@10 -- # set +x
00:29:16.855  ************************************
00:29:16.855  START TEST bdev_verify_big_io
00:29:16.855  ************************************
00:29:16.855   00:02:47	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:29:16.855  [2024-12-14 00:02:47.224409] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:29:16.855  [2024-12-14 00:02:47.224602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138302 ]
00:29:16.855  [2024-12-14 00:02:47.397466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:16.855  [2024-12-14 00:02:47.575248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:16.855  [2024-12-14 00:02:47.575266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:17.423  Running I/O for 5 seconds...
00:29:22.697  
00:29:22.697                                                                                                  Latency(us)
00:29:22.697  
[2024-12-14T00:02:53.429Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:22.697  
[2024-12-14T00:02:53.429Z]  Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:22.697  	 Verification LBA range: start 0x0 length 0x4ff8
00:29:22.697  	 Nvme0n1p1           :       5.06    1396.04      87.25       0.00     0.00   90879.71    3306.59  130595.37
00:29:22.697  
[2024-12-14T00:02:53.429Z]  Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:22.697  	 Verification LBA range: start 0x4ff8 length 0x4ff8
00:29:22.697  	 Nvme0n1p1           :       5.07    1051.26      65.70       0.00     0.00  120506.37    2323.55  176351.42
00:29:22.697  
[2024-12-14T00:02:53.429Z]  Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:22.697  	 Verification LBA range: start 0x0 length 0x4ff7
00:29:22.697  	 Nvme0n1p2           :       5.07    1402.53      87.66       0.00     0.00   89815.77     845.27   98661.47
00:29:22.697  
[2024-12-14T00:02:53.429Z]  Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:22.697  	 Verification LBA range: start 0x4ff7 length 0x4ff7
00:29:22.697  	 Nvme0n1p2           :       5.08    1059.30      66.21       0.00     0.00  118219.49     815.48  129642.12
00:29:22.697  
[2024-12-14T00:02:53.429Z]  ===================================================================================================================
00:29:22.697  
[2024-12-14T00:02:53.429Z]  Total                       :               4909.13     306.82       0.00     0.00  102836.62     815.48  176351.42
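
The same cross-check holds for the 64 KiB run (MiB/s = IOPS/16): 4909.13/16 ≈ 306.82 MiB/s, again agreeing with the Total row.
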
00:29:24.073  
00:29:24.073  real	0m7.453s
00:29:24.073  user	0m13.735s
00:29:24.073  sys	0m0.277s
00:29:24.073   00:02:54	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:24.073   00:02:54	-- common/autotest_common.sh@10 -- # set +x
00:29:24.073  ************************************
00:29:24.073  END TEST bdev_verify_big_io
00:29:24.073  ************************************
00:29:24.073   00:02:54	-- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:24.073   00:02:54	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:29:24.073   00:02:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:24.073   00:02:54	-- common/autotest_common.sh@10 -- # set +x
00:29:24.073  ************************************
00:29:24.073  START TEST bdev_write_zeroes
00:29:24.073  ************************************
00:29:24.073   00:02:54	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:24.073  [2024-12-14 00:02:54.736672] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:29:24.073  [2024-12-14 00:02:54.736868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138404 ]
00:29:24.332  [2024-12-14 00:02:54.903934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:24.591  [2024-12-14 00:02:55.089658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:24.849  Running I/O for 1 seconds...
00:29:25.781  
00:29:25.781                                                                                                  Latency(us)
00:29:25.781  
[2024-12-14T00:02:56.513Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:25.781  
[2024-12-14T00:02:56.513Z]  Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:25.781  	 Nvme0n1p1           :       1.01   28491.49     111.29       0.00     0.00    4483.34    2189.50   14060.45
00:29:25.781  
[2024-12-14T00:02:56.513Z]  Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:25.781  	 Nvme0n1p2           :       1.01   28538.06     111.48       0.00     0.00    4469.79    2234.18   11915.64
00:29:25.781  
[2024-12-14T00:02:56.513Z]  ===================================================================================================================
00:29:25.781  
[2024-12-14T00:02:56.513Z]  Total                       :              57029.55     222.77       0.00     0.00    4476.56    2189.50   14060.45
00:29:27.156  
00:29:27.156  real	0m2.848s
00:29:27.156  user	0m2.452s
00:29:27.156  sys	0m0.296s
00:29:27.156   00:02:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:27.156   00:02:57	-- common/autotest_common.sh@10 -- # set +x
00:29:27.156  ************************************
00:29:27.156  END TEST bdev_write_zeroes
00:29:27.156  ************************************
00:29:27.156   00:02:57	-- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:27.156   00:02:57	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:29:27.156   00:02:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:27.156   00:02:57	-- common/autotest_common.sh@10 -- # set +x
00:29:27.156  ************************************
00:29:27.156  START TEST bdev_json_nonenclosed
00:29:27.156  ************************************
00:29:27.156   00:02:57	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:27.156  [2024-12-14 00:02:57.618653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:29:27.156  [2024-12-14 00:02:57.618963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138467 ]
00:29:27.156  [2024-12-14 00:02:57.771223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:27.414  [2024-12-14 00:02:57.949685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:27.414  [2024-12-14 00:02:57.950165] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:29:27.414  [2024-12-14 00:02:57.950329] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:27.673  
00:29:27.673  real	0m0.716s
00:29:27.673  user	0m0.484s
00:29:27.673  sys	0m0.132s
00:29:27.673   00:02:58	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:27.673   00:02:58	-- common/autotest_common.sh@10 -- # set +x
00:29:27.673  ************************************
00:29:27.673  END TEST bdev_json_nonenclosed
00:29:27.673  ************************************
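
The "not enclosed in {}" failure exercised here corresponds to a config whose subsystems array lacks the top-level object wrapper. The exact contents of nonenclosed.json are not shown in this log, but the shape being rejected versus accepted would be:

    rejected: "subsystems": []          (bare key/value, no enclosing {})
    accepted: { "subsystems": [] }
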
00:29:27.673   00:02:58	-- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:27.673   00:02:58	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:29:27.673   00:02:58	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:27.673   00:02:58	-- common/autotest_common.sh@10 -- # set +x
00:29:27.673  ************************************
00:29:27.673  START TEST bdev_json_nonarray
00:29:27.673  ************************************
00:29:27.673   00:02:58	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:27.673  [2024-12-14 00:02:58.393796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:29:27.673  [2024-12-14 00:02:58.393992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138499 ]
00:29:27.932  [2024-12-14 00:02:58.562768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:28.190  [2024-12-14 00:02:58.752066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:28.190  [2024-12-14 00:02:58.752558] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:29:28.190  [2024-12-14 00:02:58.752702] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:28.449  
00:29:28.449  real	0m0.756s
00:29:28.449  user	0m0.507s
00:29:28.449  sys	0m0.148s
00:29:28.449   00:02:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:28.449   00:02:59	-- common/autotest_common.sh@10 -- # set +x
00:29:28.449  ************************************
00:29:28.449  END TEST bdev_json_nonarray
00:29:28.449  ************************************
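
This companion test feeds a config where "subsystems" is present but not an array (for example { "subsystems": {} }), and likewise passes only if the app exits through spdk_app_stop with a non-zero code instead of crashing.
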
00:29:28.449   00:02:59	-- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]]
00:29:28.449   00:02:59	-- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]]
00:29:28.449   00:02:59	-- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid
00:29:28.449   00:02:59	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:28.449   00:02:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:28.449   00:02:59	-- common/autotest_common.sh@10 -- # set +x
00:29:28.449  ************************************
00:29:28.449  START TEST bdev_gpt_uuid
00:29:28.449  ************************************
00:29:28.449   00:02:59	-- common/autotest_common.sh@1114 -- # bdev_gpt_uuid
00:29:28.449   00:02:59	-- bdev/blockdev.sh@612 -- # local bdev
00:29:28.449   00:02:59	-- bdev/blockdev.sh@614 -- # start_spdk_tgt
00:29:28.449   00:02:59	-- bdev/blockdev.sh@45 -- # spdk_tgt_pid=138528
00:29:28.449   00:02:59	-- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:29:28.449   00:02:59	-- bdev/blockdev.sh@47 -- # waitforlisten 138528
00:29:28.449   00:02:59	-- common/autotest_common.sh@829 -- # '[' -z 138528 ']'
00:29:28.449   00:02:59	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:28.449   00:02:59	-- common/autotest_common.sh@834 -- # local max_retries=100
00:29:28.449   00:02:59	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:28.449  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:28.449   00:02:59	-- common/autotest_common.sh@838 -- # xtrace_disable
00:29:28.449   00:02:59	-- common/autotest_common.sh@10 -- # set +x
00:29:28.449   00:02:59	-- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:29:28.708  [2024-12-14 00:02:59.212140] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:29:28.708  [2024-12-14 00:02:59.212505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138528 ]
00:29:28.708  [2024-12-14 00:02:59.366387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:28.967  [2024-12-14 00:02:59.549002] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:29:28.967  [2024-12-14 00:02:59.549522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:30.452   00:03:00	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:30.452   00:03:00	-- common/autotest_common.sh@862 -- # return 0
00:29:30.452   00:03:00	-- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:29:30.452   00:03:00	-- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.452   00:03:00	-- common/autotest_common.sh@10 -- # set +x
00:29:30.452  Some configs were skipped because the RPC state that can call them passed over.
00:29:30.452   00:03:00	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.452   00:03:00	-- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine
00:29:30.452   00:03:00	-- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.452   00:03:00	-- common/autotest_common.sh@10 -- # set +x
00:29:30.452   00:03:00	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.452    00:03:00	-- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:29:30.452    00:03:00	-- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.452    00:03:00	-- common/autotest_common.sh@10 -- # set +x
00:29:30.452    00:03:00	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.452   00:03:00	-- bdev/blockdev.sh@619 -- # bdev='[
00:29:30.452  {
00:29:30.452  "name": "Nvme0n1p1",
00:29:30.452  "aliases": [
00:29:30.452  "6f89f330-603b-4116-ac73-2ca8eae53030"
00:29:30.452  ],
00:29:30.452  "product_name": "GPT Disk",
00:29:30.452  "block_size": 4096,
00:29:30.452  "num_blocks": 655104,
00:29:30.452  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:29:30.452  "assigned_rate_limits": {
00:29:30.452  "rw_ios_per_sec": 0,
00:29:30.452  "rw_mbytes_per_sec": 0,
00:29:30.452  "r_mbytes_per_sec": 0,
00:29:30.452  "w_mbytes_per_sec": 0
00:29:30.452  },
00:29:30.452  "claimed": false,
00:29:30.452  "zoned": false,
00:29:30.452  "supported_io_types": {
00:29:30.452  "read": true,
00:29:30.452  "write": true,
00:29:30.452  "unmap": true,
00:29:30.452  "write_zeroes": true,
00:29:30.452  "flush": true,
00:29:30.452  "reset": true,
00:29:30.452  "compare": true,
00:29:30.452  "compare_and_write": false,
00:29:30.452  "abort": true,
00:29:30.452  "nvme_admin": false,
00:29:30.452  "nvme_io": false
00:29:30.452  },
00:29:30.452  "driver_specific": {
00:29:30.452  "gpt": {
00:29:30.452  "base_bdev": "Nvme0n1",
00:29:30.452  "offset_blocks": 256,
00:29:30.452  "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:29:30.452  "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:29:30.452  "partition_name": "SPDK_TEST_first"
00:29:30.452  }
00:29:30.452  }
00:29:30.452  }
00:29:30.452  ]'
00:29:30.452    00:03:00	-- bdev/blockdev.sh@620 -- # jq -r length
00:29:30.452   00:03:01	-- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]]
00:29:30.452    00:03:01	-- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]'
00:29:30.452   00:03:01	-- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:29:30.452    00:03:01	-- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:29:30.452   00:03:01	-- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
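
The backslash-laden comparisons above are bash [[ == ]] pattern matches: every character of the expected UUID is escaped so the right-hand side is matched literally rather than as a glob, which is how these helpers assert exact string equality under xtrace.
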
00:29:30.452    00:03:01	-- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:29:30.452    00:03:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.452    00:03:01	-- common/autotest_common.sh@10 -- # set +x
00:29:30.452    00:03:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.452   00:03:01	-- bdev/blockdev.sh@624 -- # bdev='[
00:29:30.452  {
00:29:30.452  "name": "Nvme0n1p2",
00:29:30.452  "aliases": [
00:29:30.452  "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:29:30.452  ],
00:29:30.452  "product_name": "GPT Disk",
00:29:30.452  "block_size": 4096,
00:29:30.452  "num_blocks": 655103,
00:29:30.452  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:29:30.452  "assigned_rate_limits": {
00:29:30.452  "rw_ios_per_sec": 0,
00:29:30.452  "rw_mbytes_per_sec": 0,
00:29:30.452  "r_mbytes_per_sec": 0,
00:29:30.452  "w_mbytes_per_sec": 0
00:29:30.452  },
00:29:30.452  "claimed": false,
00:29:30.452  "zoned": false,
00:29:30.452  "supported_io_types": {
00:29:30.452  "read": true,
00:29:30.452  "write": true,
00:29:30.452  "unmap": true,
00:29:30.452  "write_zeroes": true,
00:29:30.452  "flush": true,
00:29:30.452  "reset": true,
00:29:30.452  "compare": true,
00:29:30.452  "compare_and_write": false,
00:29:30.452  "abort": true,
00:29:30.452  "nvme_admin": false,
00:29:30.452  "nvme_io": false
00:29:30.452  },
00:29:30.452  "driver_specific": {
00:29:30.452  "gpt": {
00:29:30.452  "base_bdev": "Nvme0n1",
00:29:30.452  "offset_blocks": 655360,
00:29:30.452  "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:29:30.452  "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:29:30.452  "partition_name": "SPDK_TEST_second"
00:29:30.452  }
00:29:30.452  }
00:29:30.452  }
00:29:30.452  ]'
00:29:30.452    00:03:01	-- bdev/blockdev.sh@625 -- # jq -r length
00:29:30.452   00:03:01	-- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]]
00:29:30.452    00:03:01	-- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]'
00:29:30.711   00:03:01	-- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:29:30.711    00:03:01	-- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:29:30.711   00:03:01	-- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:29:30.711   00:03:01	-- bdev/blockdev.sh@629 -- # killprocess 138528
00:29:30.711   00:03:01	-- common/autotest_common.sh@936 -- # '[' -z 138528 ']'
00:29:30.711   00:03:01	-- common/autotest_common.sh@940 -- # kill -0 138528
00:29:30.711    00:03:01	-- common/autotest_common.sh@941 -- # uname
00:29:30.711   00:03:01	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:30.711    00:03:01	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138528
00:29:30.711   00:03:01	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:29:30.711   00:03:01	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:29:30.711   00:03:01	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 138528'
00:29:30.711  killing process with pid 138528
00:29:30.711   00:03:01	-- common/autotest_common.sh@955 -- # kill 138528
00:29:30.711   00:03:01	-- common/autotest_common.sh@960 -- # wait 138528
00:29:32.614  
00:29:32.614  real	0m4.055s
00:29:32.614  user	0m4.371s
00:29:32.614  sys	0m0.555s
00:29:32.614  ************************************
00:29:32.614  END TEST bdev_gpt_uuid
00:29:32.614  ************************************
00:29:32.614   00:03:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:32.614   00:03:03	-- common/autotest_common.sh@10 -- # set +x
00:29:32.614   00:03:03	-- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]]
00:29:32.614   00:03:03	-- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:29:32.614   00:03:03	-- bdev/blockdev.sh@809 -- # cleanup
00:29:32.614   00:03:03	-- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:29:32.614   00:03:03	-- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:29:32.614   00:03:03	-- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]]
00:29:32.614   00:03:03	-- bdev/blockdev.sh@28 -- # [[ gpt == daos ]]
00:29:32.614   00:03:03	-- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]]
00:29:32.614   00:03:03	-- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:29:32.873  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:29:32.873  Waiting for block devices as requested
00:29:33.132  0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:29:33.132   00:03:03	-- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]]
00:29:33.132   00:03:03	-- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1
00:29:33.132  /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:29:33.132  /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:29:33.132  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:29:33.132  /dev/nvme0n1: calling ioctl to re-read partition table: Success
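
The three erasures match GPT's layout on this 4096-byte-sector namespace: the primary header at byte 0x1000 (LBA 1), the backup header in the last block at 0x13ffff000 (the device is 0x140000000 bytes, i.e. 5 GiB), and the 55 aa protective-MBR signature at offset 0x1fe of LBA 0.
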
00:29:33.132   00:03:03	-- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]]
00:29:33.132  
00:29:33.132  real	0m44.471s
00:29:33.132  user	1m2.024s
00:29:33.132  sys	0m6.404s
00:29:33.132   00:03:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:33.132   00:03:03	-- common/autotest_common.sh@10 -- # set +x
00:29:33.132  ************************************
00:29:33.132  END TEST blockdev_nvme_gpt
00:29:33.132  ************************************
00:29:33.132   00:03:03	-- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:29:33.132   00:03:03	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:33.132   00:03:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:33.132   00:03:03	-- common/autotest_common.sh@10 -- # set +x
00:29:33.132  ************************************
00:29:33.132  START TEST nvme
00:29:33.132  ************************************
00:29:33.132   00:03:03	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:29:33.132  * Looking for test storage...
00:29:33.391  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:29:33.392    00:03:03	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:29:33.392     00:03:03	-- common/autotest_common.sh@1690 -- # lcov --version
00:29:33.392     00:03:03	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:29:33.392    00:03:03	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:29:33.392    00:03:03	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:29:33.392    00:03:03	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:29:33.392    00:03:03	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:29:33.392    00:03:03	-- scripts/common.sh@335 -- # IFS=.-:
00:29:33.392    00:03:03	-- scripts/common.sh@335 -- # read -ra ver1
00:29:33.392    00:03:03	-- scripts/common.sh@336 -- # IFS=.-:
00:29:33.392    00:03:03	-- scripts/common.sh@336 -- # read -ra ver2
00:29:33.392    00:03:03	-- scripts/common.sh@337 -- # local 'op=<'
00:29:33.392    00:03:03	-- scripts/common.sh@339 -- # ver1_l=2
00:29:33.392    00:03:03	-- scripts/common.sh@340 -- # ver2_l=1
00:29:33.392    00:03:03	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:29:33.392    00:03:03	-- scripts/common.sh@343 -- # case "$op" in
00:29:33.392    00:03:03	-- scripts/common.sh@344 -- # : 1
00:29:33.392    00:03:03	-- scripts/common.sh@363 -- # (( v = 0 ))
00:29:33.392    00:03:03	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:33.392     00:03:03	-- scripts/common.sh@364 -- # decimal 1
00:29:33.392     00:03:03	-- scripts/common.sh@352 -- # local d=1
00:29:33.392     00:03:03	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:33.392     00:03:03	-- scripts/common.sh@354 -- # echo 1
00:29:33.392    00:03:03	-- scripts/common.sh@364 -- # ver1[v]=1
00:29:33.392     00:03:03	-- scripts/common.sh@365 -- # decimal 2
00:29:33.392     00:03:03	-- scripts/common.sh@352 -- # local d=2
00:29:33.392     00:03:03	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:33.392     00:03:03	-- scripts/common.sh@354 -- # echo 2
00:29:33.392    00:03:03	-- scripts/common.sh@365 -- # ver2[v]=2
00:29:33.392    00:03:03	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:29:33.392    00:03:03	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:29:33.392    00:03:03	-- scripts/common.sh@367 -- # return 0
00:29:33.392    00:03:03	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:33.392    00:03:03	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:29:33.392  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:33.392  		--rc genhtml_branch_coverage=1
00:29:33.392  		--rc genhtml_function_coverage=1
00:29:33.392  		--rc genhtml_legend=1
00:29:33.392  		--rc geninfo_all_blocks=1
00:29:33.392  		--rc geninfo_unexecuted_blocks=1
00:29:33.392  		
00:29:33.392  		'
00:29:33.392    00:03:03	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:29:33.392  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:33.392  		--rc genhtml_branch_coverage=1
00:29:33.392  		--rc genhtml_function_coverage=1
00:29:33.392  		--rc genhtml_legend=1
00:29:33.392  		--rc geninfo_all_blocks=1
00:29:33.392  		--rc geninfo_unexecuted_blocks=1
00:29:33.392  		
00:29:33.392  		'
00:29:33.392    00:03:03	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:29:33.392  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:33.392  		--rc genhtml_branch_coverage=1
00:29:33.392  		--rc genhtml_function_coverage=1
00:29:33.392  		--rc genhtml_legend=1
00:29:33.392  		--rc geninfo_all_blocks=1
00:29:33.392  		--rc geninfo_unexecuted_blocks=1
00:29:33.392  		
00:29:33.392  		'
00:29:33.392    00:03:03	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:29:33.392  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:33.392  		--rc genhtml_branch_coverage=1
00:29:33.392  		--rc genhtml_function_coverage=1
00:29:33.392  		--rc genhtml_legend=1
00:29:33.392  		--rc geninfo_all_blocks=1
00:29:33.392  		--rc geninfo_unexecuted_blocks=1
00:29:33.392  		
00:29:33.392  		'
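
The scripts/common.sh trace above is the lcov version gate: both version strings are split on '.', '-' and ':' into arrays and compared field by field. The same idea as a standalone sketch (missing fields treated as 0, which the original handles with explicit padding):

    ver_lt() {
        # returns 0 (true) if $1 is strictly older than $2
        local IFS=.-: i v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo "lcov predates 2.x"
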
00:29:33.392   00:03:03	-- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:29:33.651  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:29:33.910  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:29:34.847    00:03:05	-- nvme/nvme.sh@79 -- # uname
00:29:34.847   00:03:05	-- nvme/nvme.sh@79 -- # '[' Linux = Linux ']'
00:29:34.847   00:03:05	-- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT
00:29:34.847   00:03:05	-- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE'
00:29:34.847   00:03:05	-- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE'
00:29:34.847   00:03:05	-- common/autotest_common.sh@1054 -- # _randomize_va_space=2
00:29:34.847   00:03:05	-- common/autotest_common.sh@1055 -- # echo 0
00:29:34.847   00:03:05	-- common/autotest_common.sh@1057 -- # stubpid=138961
00:29:34.847  Waiting for stub to be ready for secondary processes...
00:29:34.847   00:03:05	-- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE
00:29:34.847   00:03:05	-- common/autotest_common.sh@1058 -- # echo Waiting for stub to be ready for secondary processes...
00:29:34.847   00:03:05	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:29:34.847   00:03:05	-- common/autotest_common.sh@1061 -- # [[ -e /proc/138961 ]]
00:29:34.847   00:03:05	-- common/autotest_common.sh@1062 -- # sleep 1s
00:29:34.847  [2024-12-14 00:03:05.533166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:29:34.847  [2024-12-14 00:03:05.533311] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:35.784   00:03:06	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:29:35.784   00:03:06	-- common/autotest_common.sh@1061 -- # [[ -e /proc/138961 ]]
00:29:35.784   00:03:06	-- common/autotest_common.sh@1062 -- # sleep 1s
00:29:36.043  [2024-12-14 00:03:06.737206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:36.302  [2024-12-14 00:03:06.952048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:36.302  [2024-12-14 00:03:06.952220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:36.302  [2024-12-14 00:03:06.952230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:29:36.302  [2024-12-14 00:03:06.965802] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:29:36.302  [2024-12-14 00:03:06.976628] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:29:36.302  [2024-12-14 00:03:06.977310] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:29:36.870   00:03:07	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:29:36.870  done.
00:29:36.870   00:03:07	-- common/autotest_common.sh@1064 -- # echo done.
00:29:36.870   00:03:07	-- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:29:36.870   00:03:07	-- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:29:36.870   00:03:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:36.870   00:03:07	-- common/autotest_common.sh@10 -- # set +x
00:29:36.870  ************************************
00:29:36.870  START TEST nvme_reset
00:29:36.870  ************************************
00:29:36.870   00:03:07	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:29:37.128  Initializing NVMe Controllers
00:29:37.128  Skipping QEMU NVMe SSD at 0000:00:06.0
00:29:37.128  No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting
00:29:37.128  
00:29:37.128  real	0m0.321s
00:29:37.128  user	0m0.123s
00:29:37.128  sys	0m0.118s
00:29:37.128   00:03:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:37.128   00:03:07	-- common/autotest_common.sh@10 -- # set +x
00:29:37.128  ************************************
00:29:37.128  END TEST nvme_reset
00:29:37.128  ************************************
00:29:37.386   00:03:07	-- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify
00:29:37.386   00:03:07	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:37.386   00:03:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:37.386   00:03:07	-- common/autotest_common.sh@10 -- # set +x
00:29:37.386  ************************************
00:29:37.386  START TEST nvme_identify
00:29:37.386  ************************************
00:29:37.386   00:03:07	-- common/autotest_common.sh@1114 -- # nvme_identify
00:29:37.386   00:03:07	-- nvme/nvme.sh@12 -- # bdfs=()
00:29:37.386   00:03:07	-- nvme/nvme.sh@12 -- # local bdfs bdf
00:29:37.386   00:03:07	-- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:29:37.386    00:03:07	-- nvme/nvme.sh@13 -- # get_nvme_bdfs
00:29:37.386    00:03:07	-- common/autotest_common.sh@1508 -- # bdfs=()
00:29:37.386    00:03:07	-- common/autotest_common.sh@1508 -- # local bdfs
00:29:37.386    00:03:07	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:29:37.386     00:03:07	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:29:37.386     00:03:07	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:29:37.386    00:03:07	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:29:37.386    00:03:07	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0
00:29:37.386   00:03:07	-- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0
00:29:37.646  [2024-12-14 00:03:08.185140] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 138994 terminated unexpected
00:29:37.646  =====================================================
00:29:37.646  NVMe Controller at 0000:00:06.0 [1b36:0010]
00:29:37.646  =====================================================
00:29:37.646  Controller Capabilities/Features
00:29:37.646  ================================
00:29:37.646  Vendor ID:                             1b36
00:29:37.646  Subsystem Vendor ID:                   1af4
00:29:37.646  Serial Number:                         12340
00:29:37.646  Model Number:                          QEMU NVMe Ctrl
00:29:37.646  Firmware Version:                      8.0.0
00:29:37.646  Recommended Arb Burst:                 6
00:29:37.646  IEEE OUI Identifier:                   00 54 52
00:29:37.646  Multi-path I/O
00:29:37.646    May have multiple subsystem ports:   No
00:29:37.646    May have multiple controllers:       No
00:29:37.646    Associated with SR-IOV VF:           No
00:29:37.646  Max Data Transfer Size:                524288
00:29:37.646  Max Number of Namespaces:              256
00:29:37.646  Max Number of I/O Queues:              64
00:29:37.646  NVMe Specification Version (VS):       1.4
00:29:37.646  NVMe Specification Version (Identify): 1.4
00:29:37.646  Maximum Queue Entries:                 2048
00:29:37.646  Contiguous Queues Required:            Yes
00:29:37.646  Arbitration Mechanisms Supported
00:29:37.646    Weighted Round Robin:                Not Supported
00:29:37.646    Vendor Specific:                     Not Supported
00:29:37.646  Reset Timeout:                         7500 ms
00:29:37.646  Doorbell Stride:                       4 bytes
00:29:37.646  NVM Subsystem Reset:                   Not Supported
00:29:37.646  Command Sets Supported
00:29:37.646    NVM Command Set:                     Supported
00:29:37.646  Boot Partition:                        Not Supported
00:29:37.646  Memory Page Size Minimum:              4096 bytes
00:29:37.646  Memory Page Size Maximum:              65536 bytes
00:29:37.646  Persistent Memory Region:              Not Supported
00:29:37.646  Optional Asynchronous Events Supported
00:29:37.646    Namespace Attribute Notices:         Supported
00:29:37.646    Firmware Activation Notices:         Not Supported
00:29:37.646    ANA Change Notices:                  Not Supported
00:29:37.646    PLE Aggregate Log Change Notices:    Not Supported
00:29:37.646    LBA Status Info Alert Notices:       Not Supported
00:29:37.646    EGE Aggregate Log Change Notices:    Not Supported
00:29:37.646    Normal NVM Subsystem Shutdown event: Not Supported
00:29:37.646    Zone Descriptor Change Notices:      Not Supported
00:29:37.646    Discovery Log Change Notices:        Not Supported
00:29:37.646  Controller Attributes
00:29:37.646    128-bit Host Identifier:             Not Supported
00:29:37.646    Non-Operational Permissive Mode:     Not Supported
00:29:37.646    NVM Sets:                            Not Supported
00:29:37.646    Read Recovery Levels:                Not Supported
00:29:37.646    Endurance Groups:                    Not Supported
00:29:37.646    Predictable Latency Mode:            Not Supported
00:29:37.646    Traffic Based Keep Alive:            Not Supported
00:29:37.646    Namespace Granularity:               Not Supported
00:29:37.646    SQ Associations:                     Not Supported
00:29:37.646    UUID List:                           Not Supported
00:29:37.646    Multi-Domain Subsystem:              Not Supported
00:29:37.646    Fixed Capacity Management:           Not Supported
00:29:37.646    Variable Capacity Management:        Not Supported
00:29:37.646    Delete Endurance Group:              Not Supported
00:29:37.646    Delete NVM Set:                      Not Supported
00:29:37.646    Extended LBA Formats Supported:      Supported
00:29:37.646    Flexible Data Placement Supported:   Not Supported
00:29:37.646  
00:29:37.646  Controller Memory Buffer Support
00:29:37.646  ================================
00:29:37.646  Supported:                             No
00:29:37.646  
00:29:37.646  Persistent Memory Region Support
00:29:37.646  ================================
00:29:37.646  Supported:                             No
00:29:37.646  
00:29:37.646  Admin Command Set Attributes
00:29:37.646  ============================
00:29:37.646  Security Send/Receive:                 Not Supported
00:29:37.646  Format NVM:                            Supported
00:29:37.646  Firmware Activate/Download:            Not Supported
00:29:37.646  Namespace Management:                  Supported
00:29:37.646  Device Self-Test:                      Not Supported
00:29:37.646  Directives:                            Supported
00:29:37.646  NVMe-MI:                               Not Supported
00:29:37.646  Virtualization Management:             Not Supported
00:29:37.646  Doorbell Buffer Config:                Supported
00:29:37.646  Get LBA Status Capability:             Not Supported
00:29:37.646  Command & Feature Lockdown Capability: Not Supported
00:29:37.646  Abort Command Limit:                   4
00:29:37.646  Async Event Request Limit:             4
00:29:37.646  Number of Firmware Slots:              N/A
00:29:37.646  Firmware Slot 1 Read-Only:             N/A
00:29:37.646  Firmware Activation Without Reset:     N/A
00:29:37.646  Multiple Update Detection Support:     N/A
00:29:37.646  Firmware Update Granularity:           No Information Provided
00:29:37.646  Per-Namespace SMART Log:               Yes
00:29:37.646  Asymmetric Namespace Access Log Page:  Not Supported
00:29:37.646  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:29:37.646  Command Effects Log Page:              Supported
00:29:37.646  Get Log Page Extended Data:            Supported
00:29:37.646  Telemetry Log Pages:                   Not Supported
00:29:37.646  Persistent Event Log Pages:            Not Supported
00:29:37.646  Supported Log Pages Log Page:          May Support
00:29:37.646  Commands Supported & Effects Log Page: Not Supported
00:29:37.646  Feature Identifiers & Effects Log Page: May Support
00:29:37.646  NVMe-MI Commands & Effects Log Page:   May Support
00:29:37.646  Data Area 4 for Telemetry Log:         Not Supported
00:29:37.646  Error Log Page Entries Supported:      1
00:29:37.647  Keep Alive:                            Not Supported
00:29:37.647  
00:29:37.647  NVM Command Set Attributes
00:29:37.647  ==========================
00:29:37.647  Submission Queue Entry Size
00:29:37.647    Max:                       64
00:29:37.647    Min:                       64
00:29:37.647  Completion Queue Entry Size
00:29:37.647    Max:                       16
00:29:37.647    Min:                       16
00:29:37.647  Number of Namespaces:        256
00:29:37.647  Compare Command:             Supported
00:29:37.647  Write Uncorrectable Command: Not Supported
00:29:37.647  Dataset Management Command:  Supported
00:29:37.647  Write Zeroes Command:        Supported
00:29:37.647  Set Features Save Field:     Supported
00:29:37.647  Reservations:                Not Supported
00:29:37.647  Timestamp:                   Supported
00:29:37.647  Copy:                        Supported
00:29:37.647  Volatile Write Cache:        Present
00:29:37.647  Atomic Write Unit (Normal):  1
00:29:37.647  Atomic Write Unit (PFail):   1
00:29:37.647  Atomic Compare & Write Unit: 1
00:29:37.647  Fused Compare & Write:       Not Supported
00:29:37.647  Scatter-Gather List
00:29:37.647    SGL Command Set:           Supported
00:29:37.647    SGL Keyed:                 Not Supported
00:29:37.647    SGL Bit Bucket Descriptor: Not Supported
00:29:37.647    SGL Metadata Pointer:      Not Supported
00:29:37.647    Oversized SGL:             Not Supported
00:29:37.647    SGL Metadata Address:      Not Supported
00:29:37.647    SGL Offset:                Not Supported
00:29:37.647    Transport SGL Data Block:  Not Supported
00:29:37.647  Replay Protected Memory Block:  Not Supported
00:29:37.647  
00:29:37.647  Firmware Slot Information
00:29:37.647  =========================
00:29:37.647  Active slot:                 1
00:29:37.647  Slot 1 Firmware Revision:    1.0
00:29:37.647  
00:29:37.647  
00:29:37.647  Commands Supported and Effects
00:29:37.647  ==============================
00:29:37.647  Admin Commands
00:29:37.647  --------------
00:29:37.647     Delete I/O Submission Queue (00h): Supported 
00:29:37.647     Create I/O Submission Queue (01h): Supported 
00:29:37.647                    Get Log Page (02h): Supported 
00:29:37.647     Delete I/O Completion Queue (04h): Supported 
00:29:37.647     Create I/O Completion Queue (05h): Supported 
00:29:37.647                        Identify (06h): Supported 
00:29:37.647                           Abort (08h): Supported 
00:29:37.647                    Set Features (09h): Supported 
00:29:37.647                    Get Features (0Ah): Supported 
00:29:37.647      Asynchronous Event Request (0Ch): Supported 
00:29:37.647            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:29:37.647                  Directive Send (19h): Supported 
00:29:37.647               Directive Receive (1Ah): Supported 
00:29:37.647       Virtualization Management (1Ch): Supported 
00:29:37.647          Doorbell Buffer Config (7Ch): Supported 
00:29:37.647                      Format NVM (80h): Supported LBA-Change 
00:29:37.647  I/O Commands
00:29:37.647  ------------
00:29:37.647                           Flush (00h): Supported LBA-Change 
00:29:37.647                           Write (01h): Supported LBA-Change 
00:29:37.647                            Read (02h): Supported 
00:29:37.647                         Compare (05h): Supported 
00:29:37.647                    Write Zeroes (08h): Supported LBA-Change 
00:29:37.647              Dataset Management (09h): Supported LBA-Change 
00:29:37.647                         Unknown (0Ch): Supported 
00:29:37.647                         Unknown (12h): Supported 
00:29:37.647                            Copy (19h): Supported LBA-Change 
00:29:37.647                         Unknown (1Dh): Supported LBA-Change 
00:29:37.647  
00:29:37.647  Error Log
00:29:37.647  =========
00:29:37.647  
00:29:37.647  Arbitration
00:29:37.647  ===========
00:29:37.647  Arbitration Burst:           no limit
00:29:37.647  
00:29:37.647  Power Management
00:29:37.647  ================
00:29:37.647  Number of Power States:          1
00:29:37.647  Current Power State:             Power State #0
00:29:37.647  Power State #0:
00:29:37.647    Max Power:                     25.00 W
00:29:37.647    Non-Operational State:         Operational
00:29:37.647    Entry Latency:                 16 microseconds
00:29:37.647    Exit Latency:                  4 microseconds
00:29:37.647    Relative Read Throughput:      0
00:29:37.647    Relative Read Latency:         0
00:29:37.647    Relative Write Throughput:     0
00:29:37.647    Relative Write Latency:        0
00:29:37.647    Idle Power:                     Not Reported
00:29:37.647    Active Power:                   Not Reported
00:29:37.647  Non-Operational Permissive Mode: Not Supported
00:29:37.647  
00:29:37.647  Health Information
00:29:37.647  ==================
00:29:37.647  Critical Warnings:
00:29:37.647    Available Spare Space:     OK
00:29:37.647    Temperature:               OK
00:29:37.647    Device Reliability:        OK
00:29:37.647    Read Only:                 No
00:29:37.647    Volatile Memory Backup:    OK
00:29:37.647  Current Temperature:         323 Kelvin (50 Celsius)
00:29:37.647  Temperature Threshold:       343 Kelvin (70 Celsius)
00:29:37.647  Available Spare:             0%
00:29:37.647  Available Spare Threshold:   0%
00:29:37.647  Life Percentage Used:        0%
00:29:37.647  Data Units Read:             8585
00:29:37.647  Data Units Written:          4199
00:29:37.647  Host Read Commands:          298510
00:29:37.647  Host Write Commands:         164499
00:29:37.647  Controller Busy Time:        0 minutes
00:29:37.647  Power Cycles:                0
00:29:37.647  Power On Hours:              0 hours
00:29:37.647  Unsafe Shutdowns:            0
00:29:37.647  Unrecoverable Media Errors:  0
00:29:37.647  Lifetime Error Log Entries:  0
00:29:37.647  Warning Temperature Time:    0 minutes
00:29:37.647  Critical Temperature Time:   0 minutes
00:29:37.647  
00:29:37.647  Number of Queues
00:29:37.647  ================
00:29:37.647  Number of I/O Submission Queues:      64
00:29:37.647  Number of I/O Completion Queues:      64
00:29:37.647  
00:29:37.647  ZNS Specific Controller Data
00:29:37.647  ============================
00:29:37.647  Zone Append Size Limit:      0
00:29:37.647  
00:29:37.647  
00:29:37.647  Active Namespaces
00:29:37.647  =================
00:29:37.647  Namespace ID:1
00:29:37.647  Error Recovery Timeout:                Unlimited
00:29:37.647  Command Set Identifier:                NVM (00h)
00:29:37.647  Deallocate:                            Supported
00:29:37.647  Deallocated/Unwritten Error:           Supported
00:29:37.647  Deallocated Read Value:                All 0x00
00:29:37.647  Deallocate in Write Zeroes:            Not Supported
00:29:37.647  Deallocated Guard Field:               0xFFFF
00:29:37.647  Flush:                                 Supported
00:29:37.647  Reservation:                           Not Supported
00:29:37.647  Namespace Sharing Capabilities:        Private
00:29:37.647  Size (in LBAs):                        1310720 (5GiB)
00:29:37.647  Capacity (in LBAs):                    1310720 (5GiB)
00:29:37.647  Utilization (in LBAs):                 1310720 (5GiB)
00:29:37.647  Thin Provisioning:                     Not Supported
00:29:37.647  Per-NS Atomic Units:                   No
00:29:37.647  Maximum Single Source Range Length:    128
00:29:37.647  Maximum Copy Length:                   128
00:29:37.647  Maximum Source Range Count:            128
00:29:37.647  NGUID/EUI64 Never Reused:              No
00:29:37.647  Namespace Write Protected:             No
00:29:37.647  Number of LBA Formats:                 8
00:29:37.647  Current LBA Format:                    LBA Format #04
00:29:37.647  LBA Format #00: Data Size:   512  Metadata Size:     0
00:29:37.647  LBA Format #01: Data Size:   512  Metadata Size:     8
00:29:37.647  LBA Format #02: Data Size:   512  Metadata Size:    16
00:29:37.647  LBA Format #03: Data Size:   512  Metadata Size:    64
00:29:37.647  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:29:37.647  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:29:37.647  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:29:37.647  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:29:37.647  
00:29:37.647   00:03:08	-- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:29:37.647   00:03:08	-- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0
00:29:37.907  =====================================================
00:29:37.907  NVMe Controller at 0000:00:06.0 [1b36:0010]
00:29:37.907  =====================================================
00:29:37.907  Controller Capabilities/Features
00:29:37.907  ================================
00:29:37.907  Vendor ID:                             1b36
00:29:37.907  Subsystem Vendor ID:                   1af4
00:29:37.907  Serial Number:                         12340
00:29:37.907  Model Number:                          QEMU NVMe Ctrl
00:29:37.907  Firmware Version:                      8.0.0
00:29:37.907  Recommended Arb Burst:                 6
00:29:37.907  IEEE OUI Identifier:                   00 54 52
00:29:37.907  Multi-path I/O
00:29:37.907    May have multiple subsystem ports:   No
00:29:37.907    May have multiple controllers:       No
00:29:37.907    Associated with SR-IOV VF:           No
00:29:37.907  Max Data Transfer Size:                524288
00:29:37.907  Max Number of Namespaces:              256
00:29:37.907  Max Number of I/O Queues:              64
00:29:37.907  NVMe Specification Version (VS):       1.4
00:29:37.907  NVMe Specification Version (Identify): 1.4
00:29:37.907  Maximum Queue Entries:                 2048
00:29:37.907  Contiguous Queues Required:            Yes
00:29:37.907  Arbitration Mechanisms Supported
00:29:37.907    Weighted Round Robin:                Not Supported
00:29:37.907    Vendor Specific:                     Not Supported
00:29:37.907  Reset Timeout:                         7500 ms
00:29:37.907  Doorbell Stride:                       4 bytes
00:29:37.907  NVM Subsystem Reset:                   Not Supported
00:29:37.907  Command Sets Supported
00:29:37.907    NVM Command Set:                     Supported
00:29:37.907  Boot Partition:                        Not Supported
00:29:37.907  Memory Page Size Minimum:              4096 bytes
00:29:37.907  Memory Page Size Maximum:              65536 bytes
00:29:37.907  Persistent Memory Region:              Not Supported
00:29:37.907  Optional Asynchronous Events Supported
00:29:37.907    Namespace Attribute Notices:         Supported
00:29:37.907    Firmware Activation Notices:         Not Supported
00:29:37.907    ANA Change Notices:                  Not Supported
00:29:37.907    PLE Aggregate Log Change Notices:    Not Supported
00:29:37.907    LBA Status Info Alert Notices:       Not Supported
00:29:37.907    EGE Aggregate Log Change Notices:    Not Supported
00:29:37.907    Normal NVM Subsystem Shutdown event: Not Supported
00:29:37.907    Zone Descriptor Change Notices:      Not Supported
00:29:37.907    Discovery Log Change Notices:        Not Supported
00:29:37.907  Controller Attributes
00:29:37.907    128-bit Host Identifier:             Not Supported
00:29:37.907    Non-Operational Permissive Mode:     Not Supported
00:29:37.907    NVM Sets:                            Not Supported
00:29:37.907    Read Recovery Levels:                Not Supported
00:29:37.907    Endurance Groups:                    Not Supported
00:29:37.907    Predictable Latency Mode:            Not Supported
00:29:37.907    Traffic Based Keep Alive:            Not Supported
00:29:37.907    Namespace Granularity:               Not Supported
00:29:37.907    SQ Associations:                     Not Supported
00:29:37.907    UUID List:                           Not Supported
00:29:37.907    Multi-Domain Subsystem:              Not Supported
00:29:37.907    Fixed Capacity Management:           Not Supported
00:29:37.907    Variable Capacity Management:        Not Supported
00:29:37.907    Delete Endurance Group:              Not Supported
00:29:37.907    Delete NVM Set:                      Not Supported
00:29:37.907    Extended LBA Formats Supported:      Supported
00:29:37.907    Flexible Data Placement Supported:   Not Supported
00:29:37.907  
00:29:37.907  Controller Memory Buffer Support
00:29:37.907  ================================
00:29:37.907  Supported:                             No
00:29:37.907  
00:29:37.907  Persistent Memory Region Support
00:29:37.907  ================================
00:29:37.907  Supported:                             No
00:29:37.907  
00:29:37.907  Admin Command Set Attributes
00:29:37.907  ============================
00:29:37.907  Security Send/Receive:                 Not Supported
00:29:37.907  Format NVM:                            Supported
00:29:37.907  Firmware Activate/Download:            Not Supported
00:29:37.907  Namespace Management:                  Supported
00:29:37.907  Device Self-Test:                      Not Supported
00:29:37.907  Directives:                            Supported
00:29:37.907  NVMe-MI:                               Not Supported
00:29:37.907  Virtualization Management:             Not Supported
00:29:37.907  Doorbell Buffer Config:                Supported
00:29:37.907  Get LBA Status Capability:             Not Supported
00:29:37.907  Command & Feature Lockdown Capability: Not Supported
00:29:37.907  Abort Command Limit:                   4
00:29:37.907  Async Event Request Limit:             4
00:29:37.907  Number of Firmware Slots:              N/A
00:29:37.907  Firmware Slot 1 Read-Only:             N/A
00:29:37.907  Firmware Activation Without Reset:     N/A
00:29:37.907  Multiple Update Detection Support:     N/A
00:29:37.907  Firmware Update Granularity:           No Information Provided
00:29:37.908  Per-Namespace SMART Log:               Yes
00:29:37.908  Asymmetric Namespace Access Log Page:  Not Supported
00:29:37.908  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:29:37.908  Command Effects Log Page:              Supported
00:29:37.908  Get Log Page Extended Data:            Supported
00:29:37.908  Telemetry Log Pages:                   Not Supported
00:29:37.908  Persistent Event Log Pages:            Not Supported
00:29:37.908  Supported Log Pages Log Page:          May Support
00:29:37.908  Commands Supported & Effects Log Page: Not Supported
00:29:37.908  Feature Identifiers & Effects Log Page: May Support
00:29:37.908  NVMe-MI Commands & Effects Log Page:   May Support
00:29:37.908  Data Area 4 for Telemetry Log:         Not Supported
00:29:37.908  Error Log Page Entries Supported:      1
00:29:37.908  Keep Alive:                            Not Supported
00:29:37.908  
00:29:37.908  NVM Command Set Attributes
00:29:37.908  ==========================
00:29:37.908  Submission Queue Entry Size
00:29:37.908    Max:                       64
00:29:37.908    Min:                       64
00:29:37.908  Completion Queue Entry Size
00:29:37.908    Max:                       16
00:29:37.908    Min:                       16
00:29:37.908  Number of Namespaces:        256
00:29:37.908  Compare Command:             Supported
00:29:37.908  Write Uncorrectable Command: Not Supported
00:29:37.908  Dataset Management Command:  Supported
00:29:37.908  Write Zeroes Command:        Supported
00:29:37.908  Set Features Save Field:     Supported
00:29:37.908  Reservations:                Not Supported
00:29:37.908  Timestamp:                   Supported
00:29:37.908  Copy:                        Supported
00:29:37.908  Volatile Write Cache:        Present
00:29:37.908  Atomic Write Unit (Normal):  1
00:29:37.908  Atomic Write Unit (PFail):   1
00:29:37.908  Atomic Compare & Write Unit: 1
00:29:37.908  Fused Compare & Write:       Not Supported
00:29:37.908  Scatter-Gather List
00:29:37.908    SGL Command Set:           Supported
00:29:37.908    SGL Keyed:                 Not Supported
00:29:37.908    SGL Bit Bucket Descriptor: Not Supported
00:29:37.908    SGL Metadata Pointer:      Not Supported
00:29:37.908    Oversized SGL:             Not Supported
00:29:37.908    SGL Metadata Address:      Not Supported
00:29:37.908    SGL Offset:                Not Supported
00:29:37.908    Transport SGL Data Block:  Not Supported
00:29:37.908  Replay Protected Memory Block:  Not Supported
00:29:37.908  
00:29:37.908  Firmware Slot Information
00:29:37.908  =========================
00:29:37.908  Active slot:                 1
00:29:37.908  Slot 1 Firmware Revision:    1.0
00:29:37.908  
00:29:37.908  
00:29:37.908  Commands Supported and Effects
00:29:37.908  ==============================
00:29:37.908  Admin Commands
00:29:37.908  --------------
00:29:37.908     Delete I/O Submission Queue (00h): Supported 
00:29:37.908     Create I/O Submission Queue (01h): Supported 
00:29:37.908                    Get Log Page (02h): Supported 
00:29:37.908     Delete I/O Completion Queue (04h): Supported 
00:29:37.908     Create I/O Completion Queue (05h): Supported 
00:29:37.908                        Identify (06h): Supported 
00:29:37.908                           Abort (08h): Supported 
00:29:37.908                    Set Features (09h): Supported 
00:29:37.908                    Get Features (0Ah): Supported 
00:29:37.908      Asynchronous Event Request (0Ch): Supported 
00:29:37.908            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:29:37.908                  Directive Send (19h): Supported 
00:29:37.908               Directive Receive (1Ah): Supported 
00:29:37.908       Virtualization Management (1Ch): Supported 
00:29:37.908          Doorbell Buffer Config (7Ch): Supported 
00:29:37.908                      Format NVM (80h): Supported LBA-Change 
00:29:37.908  I/O Commands
00:29:37.908  ------------
00:29:37.908                           Flush (00h): Supported LBA-Change 
00:29:37.908                           Write (01h): Supported LBA-Change 
00:29:37.908                            Read (02h): Supported 
00:29:37.908                         Compare (05h): Supported 
00:29:37.908                    Write Zeroes (08h): Supported LBA-Change 
00:29:37.908              Dataset Management (09h): Supported LBA-Change 
00:29:37.908                         Unknown (0Ch): Supported 
00:29:37.908                         Unknown (12h): Supported 
00:29:37.908                            Copy (19h): Supported LBA-Change 
00:29:37.908                         Unknown (1Dh): Supported LBA-Change 
00:29:37.908  
00:29:37.908  Error Log
00:29:37.908  =========
00:29:37.908  
00:29:37.908  Arbitration
00:29:37.908  ===========
00:29:37.908  Arbitration Burst:           no limit
00:29:37.908  
00:29:37.908  Power Management
00:29:37.908  ================
00:29:37.908  Number of Power States:          1
00:29:37.908  Current Power State:             Power State #0
00:29:37.908  Power State #0:
00:29:37.908    Max Power:                     25.00 W
00:29:37.908    Non-Operational State:         Operational
00:29:37.908    Entry Latency:                 16 microseconds
00:29:37.908    Exit Latency:                  4 microseconds
00:29:37.908    Relative Read Throughput:      0
00:29:37.908    Relative Read Latency:         0
00:29:37.908    Relative Write Throughput:     0
00:29:37.908    Relative Write Latency:        0
00:29:37.908    Idle Power:                     Not Reported
00:29:37.908    Active Power:                   Not Reported
00:29:37.908  Non-Operational Permissive Mode: Not Supported
00:29:37.908  
00:29:37.908  Health Information
00:29:37.908  ==================
00:29:37.908  Critical Warnings:
00:29:37.908    Available Spare Space:     OK
00:29:37.908    Temperature:               OK
00:29:37.908    Device Reliability:        OK
00:29:37.908    Read Only:                 No
00:29:37.908    Volatile Memory Backup:    OK
00:29:37.908  Current Temperature:         323 Kelvin (50 Celsius)
00:29:37.908  Temperature Threshold:       343 Kelvin (70 Celsius)
00:29:37.908  Available Spare:             0%
00:29:37.908  Available Spare Threshold:   0%
00:29:37.908  Life Percentage Used:        0%
00:29:37.908  Data Units Read:             8585
00:29:37.908  Data Units Written:          4199
00:29:37.908  Host Read Commands:          298510
00:29:37.908  Host Write Commands:         164499
00:29:37.908  Controller Busy Time:        0 minutes
00:29:37.908  Power Cycles:                0
00:29:37.908  Power On Hours:              0 hours
00:29:37.908  Unsafe Shutdowns:            0
00:29:37.908  Unrecoverable Media Errors:  0
00:29:37.908  Lifetime Error Log Entries:  0
00:29:37.908  Warning Temperature Time:    0 minutes
00:29:37.908  Critical Temperature Time:   0 minutes
00:29:37.908  
00:29:37.908  Number of Queues
00:29:37.908  ================
00:29:37.908  Number of I/O Submission Queues:      64
00:29:37.908  Number of I/O Completion Queues:      64
00:29:37.908  
00:29:37.908  ZNS Specific Controller Data
00:29:37.908  ============================
00:29:37.908  Zone Append Size Limit:      0
00:29:37.908  
00:29:37.908  
00:29:37.908  Active Namespaces
00:29:37.908  =================
00:29:37.908  Namespace ID:1
00:29:37.908  Error Recovery Timeout:                Unlimited
00:29:37.908  Command Set Identifier:                NVM (00h)
00:29:37.908  Deallocate:                            Supported
00:29:37.908  Deallocated/Unwritten Error:           Supported
00:29:37.908  Deallocated Read Value:                All 0x00
00:29:37.908  Deallocate in Write Zeroes:            Not Supported
00:29:37.908  Deallocated Guard Field:               0xFFFF
00:29:37.908  Flush:                                 Supported
00:29:37.908  Reservation:                           Not Supported
00:29:37.908  Namespace Sharing Capabilities:        Private
00:29:37.908  Size (in LBAs):                        1310720 (5GiB)
00:29:37.908  Capacity (in LBAs):                    1310720 (5GiB)
00:29:37.908  Utilization (in LBAs):                 1310720 (5GiB)
00:29:37.908  Thin Provisioning:                     Not Supported
00:29:37.908  Per-NS Atomic Units:                   No
00:29:37.908  Maximum Single Source Range Length:    128
00:29:37.908  Maximum Copy Length:                   128
00:29:37.908  Maximum Source Range Count:            128
00:29:37.908  NGUID/EUI64 Never Reused:              No
00:29:37.908  Namespace Write Protected:             No
00:29:37.908  Number of LBA Formats:                 8
00:29:37.908  Current LBA Format:                    LBA Format #04
00:29:37.908  LBA Format #00: Data Size:   512  Metadata Size:     0
00:29:37.908  LBA Format #01: Data Size:   512  Metadata Size:     8
00:29:37.908  LBA Format #02: Data Size:   512  Metadata Size:    16
00:29:37.908  LBA Format #03: Data Size:   512  Metadata Size:    64
00:29:37.908  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:29:37.908  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:29:37.908  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:29:37.908  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:29:37.908  
00:29:37.908  
00:29:37.908  real	0m0.674s
00:29:37.908  user	0m0.296s
00:29:37.908  sys	0m0.279s
00:29:37.908   00:03:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:37.908   00:03:08	-- common/autotest_common.sh@10 -- # set +x
00:29:37.908  ************************************
00:29:37.908  END TEST nvme_identify
00:29:37.908  ************************************
00:29:37.908   00:03:08	-- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf
00:29:37.908   00:03:08	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:37.908   00:03:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:37.908   00:03:08	-- common/autotest_common.sh@10 -- # set +x
00:29:37.908  ************************************
00:29:37.908  START TEST nvme_perf
00:29:37.908  ************************************
00:29:37.908   00:03:08	-- common/autotest_common.sh@1114 -- # nvme_perf
00:29:37.908   00:03:08	-- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
00:29:39.287  Initializing NVMe Controllers
00:29:39.287  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:29:39.287  Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:29:39.287  Initialization complete. Launching workers.
00:29:39.287  ========================================================
00:29:39.287                                                                             Latency(us)
00:29:39.287  Device Information                     :       IOPS      MiB/s    Average        min        max
00:29:39.287  PCIE (0000:00:06.0) NSID 1 from core  0:   54912.00     643.50    2330.71    1319.86    7460.75
00:29:39.287  ========================================================
00:29:39.287  Total                                  :   54912.00     643.50    2330.71    1319.86    7460.75
00:29:39.287  
00:29:39.287  Summary latency data for PCIE (0000:00:06.0) NSID 1                  from core 0:
00:29:39.287  =================================================================================
00:29:39.287    1.00000% :  1489.455us
00:29:39.287   10.00000% :  1675.636us
00:29:39.287   25.00000% :  1921.396us
00:29:39.287   50.00000% :  2308.655us
00:29:39.287   75.00000% :  2695.913us
00:29:39.287   90.00000% :  2949.120us
00:29:39.287   95.00000% :  3202.327us
00:29:39.287   98.00000% :  3574.691us
00:29:39.287   99.00000% :  3708.742us
00:29:39.287   99.50000% :  3842.793us
00:29:39.287   99.90000% :  5630.138us
00:29:39.287   99.99000% :  7268.538us
00:29:39.287   99.99900% :  7477.062us
00:29:39.287   99.99990% :  7477.062us
00:29:39.287   99.99999% :  7477.062us
00:29:39.287  
00:29:39.287  Latency histogram for PCIE (0000:00:06.0) NSID 1                  from core 0:
00:29:39.287  ==============================================================================
00:29:39.287         Range in us     Cumulative    IO count
00:29:39.287   1318.167 -  1325.615:    0.0036%  (        2)
00:29:39.287   1333.062 -  1340.509:    0.0073%  (        2)
00:29:39.287   1340.509 -  1347.956:    0.0127%  (        3)
00:29:39.287   1347.956 -  1355.404:    0.0146%  (        1)
00:29:39.287   1355.404 -  1362.851:    0.0237%  (        5)
00:29:39.287   1362.851 -  1370.298:    0.0437%  (       11)
00:29:39.287   1370.298 -  1377.745:    0.0492%  (        3)
00:29:39.287   1377.745 -  1385.193:    0.0637%  (        8)
00:29:39.287   1385.193 -  1392.640:    0.0710%  (        4)
00:29:39.287   1392.640 -  1400.087:    0.0892%  (       10)
00:29:39.287   1400.087 -  1407.535:    0.1093%  (       11)
00:29:39.287   1407.535 -  1414.982:    0.1257%  (        9)
00:29:39.287   1414.982 -  1422.429:    0.1603%  (       19)
00:29:39.287   1422.429 -  1429.876:    0.2021%  (       23)
00:29:39.287   1429.876 -  1437.324:    0.2495%  (       26)
00:29:39.287   1437.324 -  1444.771:    0.3205%  (       39)
00:29:39.287   1444.771 -  1452.218:    0.4025%  (       45)
00:29:39.287   1452.218 -  1459.665:    0.5245%  (       67)
00:29:39.287   1459.665 -  1467.113:    0.6337%  (       60)
00:29:39.287   1467.113 -  1474.560:    0.7558%  (       67)
00:29:39.287   1474.560 -  1482.007:    0.9033%  (       81)
00:29:39.287   1482.007 -  1489.455:    1.0581%  (       85)
00:29:39.287   1489.455 -  1496.902:    1.2347%  (       97)
00:29:39.287   1496.902 -  1504.349:    1.4405%  (      113)
00:29:39.287   1504.349 -  1511.796:    1.6718%  (      127)
00:29:39.287   1511.796 -  1519.244:    1.9322%  (      143)
00:29:39.287   1519.244 -  1526.691:    2.2108%  (      153)
00:29:39.287   1526.691 -  1534.138:    2.5295%  (      175)
00:29:39.287   1534.138 -  1541.585:    2.8537%  (      178)
00:29:39.287   1541.585 -  1549.033:    3.2161%  (      199)
00:29:39.287   1549.033 -  1556.480:    3.5530%  (      185)
00:29:39.287   1556.480 -  1563.927:    3.9226%  (      203)
00:29:39.287   1563.927 -  1571.375:    4.3215%  (      219)
00:29:39.287   1571.375 -  1578.822:    4.6930%  (      204)
00:29:39.287   1578.822 -  1586.269:    5.1009%  (      224)
00:29:39.287   1586.269 -  1593.716:    5.5161%  (      228)
00:29:39.287   1593.716 -  1601.164:    5.9149%  (      219)
00:29:39.287   1601.164 -  1608.611:    6.3265%  (      226)
00:29:39.287   1608.611 -  1616.058:    6.7308%  (      222)
00:29:39.287   1616.058 -  1623.505:    7.1806%  (      247)
00:29:39.287   1623.505 -  1630.953:    7.6031%  (      232)
00:29:39.287   1630.953 -  1638.400:    8.0565%  (      249)
00:29:39.287   1638.400 -  1645.847:    8.4899%  (      238)
00:29:39.287   1645.847 -  1653.295:    8.9088%  (      230)
00:29:39.287   1653.295 -  1660.742:    9.3623%  (      249)
00:29:39.287   1660.742 -  1668.189:    9.8193%  (      251)
00:29:39.287   1668.189 -  1675.636:   10.2564%  (      240)
00:29:39.287   1675.636 -  1683.084:   10.7244%  (      257)
00:29:39.287   1683.084 -  1690.531:   11.1688%  (      244)
00:29:39.287   1690.531 -  1697.978:   11.6168%  (      246)
00:29:39.287   1697.978 -  1705.425:   12.0593%  (      243)
00:29:39.287   1705.425 -  1712.873:   12.5200%  (      253)
00:29:39.287   1712.873 -  1720.320:   12.9735%  (      249)
00:29:39.287   1720.320 -  1727.767:   13.4306%  (      251)
00:29:39.287   1727.767 -  1735.215:   13.9004%  (      258)
00:29:39.287   1735.215 -  1742.662:   14.3302%  (      236)
00:29:39.287   1742.662 -  1750.109:   14.8073%  (      262)
00:29:39.287   1750.109 -  1757.556:   15.2717%  (      255)
00:29:39.287   1757.556 -  1765.004:   15.7470%  (      261)
00:29:39.287   1765.004 -  1772.451:   16.2059%  (      252)
00:29:39.287   1772.451 -  1779.898:   16.6922%  (      267)
00:29:39.287   1779.898 -  1787.345:   17.1310%  (      241)
00:29:39.287   1787.345 -  1794.793:   17.6264%  (      272)
00:29:39.287   1794.793 -  1802.240:   18.0944%  (      257)
00:29:39.287   1802.240 -  1809.687:   18.5460%  (      248)
00:29:39.287   1809.687 -  1817.135:   19.0195%  (      260)
00:29:39.287   1817.135 -  1824.582:   19.4912%  (      259)
00:29:39.287   1824.582 -  1832.029:   19.9811%  (      269)
00:29:39.287   1832.029 -  1839.476:   20.4163%  (      239)
00:29:39.287   1839.476 -  1846.924:   20.9353%  (      285)
00:29:39.287   1846.924 -  1854.371:   21.3869%  (      248)
00:29:39.287   1854.371 -  1861.818:   21.8604%  (      260)
00:29:39.287   1861.818 -  1869.265:   22.3448%  (      266)
00:29:39.287   1869.265 -  1876.713:   22.7746%  (      236)
00:29:39.287   1876.713 -  1884.160:   23.2809%  (      278)
00:29:39.287   1884.160 -  1891.607:   23.7525%  (      259)
00:29:39.287   1891.607 -  1899.055:   24.2224%  (      258)
00:29:39.287   1899.055 -  1906.502:   24.6977%  (      261)
00:29:39.287   1906.502 -  1921.396:   25.6520%  (      524)
00:29:39.287   1921.396 -  1936.291:   26.5971%  (      519)
00:29:39.287   1936.291 -  1951.185:   27.5605%  (      529)
00:29:39.287   1951.185 -  1966.080:   28.5111%  (      522)
00:29:39.287   1966.080 -  1980.975:   29.4690%  (      526)
00:29:39.287   1980.975 -  1995.869:   30.4196%  (      522)
00:29:39.287   1995.869 -  2010.764:   31.3574%  (      515)
00:29:39.287   2010.764 -  2025.658:   32.3026%  (      519)
00:29:39.287   2025.658 -  2040.553:   33.2769%  (      535)
00:29:39.287   2040.553 -  2055.447:   34.2147%  (      515)
00:29:39.287   2055.447 -  2070.342:   35.1890%  (      535)
00:29:39.287   2070.342 -  2085.236:   36.1342%  (      519)
00:29:39.287   2085.236 -  2100.131:   37.0866%  (      523)
00:29:39.287   2100.131 -  2115.025:   38.0318%  (      519)
00:29:39.287   2115.025 -  2129.920:   38.9933%  (      528)
00:29:39.287   2129.920 -  2144.815:   39.9366%  (      518)
00:29:39.287   2144.815 -  2159.709:   40.8800%  (      518)
00:29:39.287   2159.709 -  2174.604:   41.8652%  (      541)
00:29:39.287   2174.604 -  2189.498:   42.8121%  (      520)
00:29:39.287   2189.498 -  2204.393:   43.7609%  (      521)
00:29:39.287   2204.393 -  2219.287:   44.7389%  (      537)
00:29:39.287   2219.287 -  2234.182:   45.6658%  (      509)
00:29:39.287   2234.182 -  2249.076:   46.6310%  (      530)
00:29:39.287   2249.076 -  2263.971:   47.6216%  (      544)
00:29:39.287   2263.971 -  2278.865:   48.5468%  (      508)
00:29:39.287   2278.865 -  2293.760:   49.5156%  (      532)
00:29:39.287   2293.760 -  2308.655:   50.4935%  (      537)
00:29:39.287   2308.655 -  2323.549:   51.4205%  (      509)
00:29:39.287   2323.549 -  2338.444:   52.3911%  (      533)
00:29:39.288   2338.444 -  2353.338:   53.3490%  (      526)
00:29:39.288   2353.338 -  2368.233:   54.2887%  (      516)
00:29:39.288   2368.233 -  2383.127:   55.2502%  (      528)
00:29:39.288   2383.127 -  2398.022:   56.2063%  (      525)
00:29:39.288   2398.022 -  2412.916:   57.1478%  (      517)
00:29:39.288   2412.916 -  2427.811:   58.1312%  (      540)
00:29:39.288   2427.811 -  2442.705:   59.0927%  (      528)
00:29:39.288   2442.705 -  2457.600:   60.0415%  (      521)
00:29:39.288   2457.600 -  2472.495:   61.0103%  (      532)
00:29:39.288   2472.495 -  2487.389:   61.9555%  (      519)
00:29:39.288   2487.389 -  2502.284:   62.9043%  (      521)
00:29:39.288   2502.284 -  2517.178:   63.8567%  (      523)
00:29:39.288   2517.178 -  2532.073:   64.8164%  (      527)
00:29:39.288   2532.073 -  2546.967:   65.7397%  (      507)
00:29:39.288   2546.967 -  2561.862:   66.7140%  (      535)
00:29:39.288   2561.862 -  2576.756:   67.6774%  (      529)
00:29:39.288   2576.756 -  2591.651:   68.6225%  (      519)
00:29:39.288   2591.651 -  2606.545:   69.5786%  (      525)
00:29:39.288   2606.545 -  2621.440:   70.4910%  (      501)
00:29:39.288   2621.440 -  2636.335:   71.4507%  (      527)
00:29:39.288   2636.335 -  2651.229:   72.3885%  (      515)
00:29:39.288   2651.229 -  2666.124:   73.3301%  (      517)
00:29:39.288   2666.124 -  2681.018:   74.2716%  (      517)
00:29:39.288   2681.018 -  2695.913:   75.1967%  (      508)
00:29:39.288   2695.913 -  2710.807:   76.1619%  (      530)
00:29:39.288   2710.807 -  2725.702:   77.1125%  (      522)
00:29:39.288   2725.702 -  2740.596:   78.0631%  (      522)
00:29:39.288   2740.596 -  2755.491:   79.0101%  (      520)
00:29:39.288   2755.491 -  2770.385:   79.9333%  (      507)
00:29:39.288   2770.385 -  2785.280:   80.8949%  (      528)
00:29:39.288   2785.280 -  2800.175:   81.8218%  (      509)
00:29:39.288   2800.175 -  2815.069:   82.7560%  (      513)
00:29:39.288   2815.069 -  2829.964:   83.6739%  (      504)
00:29:39.288   2829.964 -  2844.858:   84.5771%  (      496)
00:29:39.288   2844.858 -  2859.753:   85.4567%  (      483)
00:29:39.288   2859.753 -  2874.647:   86.3527%  (      492)
00:29:39.288   2874.647 -  2889.542:   87.1576%  (      442)
00:29:39.288   2889.542 -  2904.436:   87.9589%  (      440)
00:29:39.288   2904.436 -  2919.331:   88.7547%  (      437)
00:29:39.288   2919.331 -  2934.225:   89.4322%  (      372)
00:29:39.288   2934.225 -  2949.120:   90.0641%  (      347)
00:29:39.288   2949.120 -  2964.015:   90.6578%  (      326)
00:29:39.288   2964.015 -  2978.909:   91.1640%  (      278)
00:29:39.288   2978.909 -  2993.804:   91.6776%  (      282)
00:29:39.288   2993.804 -  3008.698:   92.1365%  (      252)
00:29:39.288   3008.698 -  3023.593:   92.5317%  (      217)
00:29:39.288   3023.593 -  3038.487:   92.8941%  (      199)
00:29:39.288   3038.487 -  3053.382:   93.1927%  (      164)
00:29:39.288   3053.382 -  3068.276:   93.4604%  (      147)
00:29:39.288   3068.276 -  3083.171:   93.6954%  (      129)
00:29:39.288   3083.171 -  3098.065:   93.8848%  (      104)
00:29:39.288   3098.065 -  3112.960:   94.0705%  (      102)
00:29:39.288   3112.960 -  3127.855:   94.2344%  (       90)
00:29:39.288   3127.855 -  3142.749:   94.3965%  (       89)
00:29:39.288   3142.749 -  3157.644:   94.5567%  (       88)
00:29:39.288   3157.644 -  3172.538:   94.7097%  (       84)
00:29:39.288   3172.538 -  3187.433:   94.8627%  (       84)
00:29:39.288   3187.433 -  3202.327:   95.0011%  (       76)
00:29:39.288   3202.327 -  3217.222:   95.1304%  (       71)
00:29:39.288   3217.222 -  3232.116:   95.2542%  (       68)
00:29:39.288   3232.116 -  3247.011:   95.3799%  (       69)
00:29:39.288   3247.011 -  3261.905:   95.5110%  (       72)
00:29:39.288   3261.905 -  3276.800:   95.6458%  (       74)
00:29:39.288   3276.800 -  3291.695:   95.7769%  (       72)
00:29:39.288   3291.695 -  3306.589:   95.8989%  (       67)
00:29:39.288   3306.589 -  3321.484:   96.0227%  (       68)
00:29:39.288   3321.484 -  3336.378:   96.1447%  (       67)
00:29:39.288   3336.378 -  3351.273:   96.2686%  (       68)
00:29:39.288   3351.273 -  3366.167:   96.3815%  (       62)
00:29:39.288   3366.167 -  3381.062:   96.5017%  (       66)
00:29:39.288   3381.062 -  3395.956:   96.6237%  (       67)
00:29:39.288   3395.956 -  3410.851:   96.7475%  (       68)
00:29:39.288   3410.851 -  3425.745:   96.8786%  (       72)
00:29:39.288   3425.745 -  3440.640:   96.9916%  (       62)
00:29:39.288   3440.640 -  3455.535:   97.1099%  (       65)
00:29:39.288   3455.535 -  3470.429:   97.2319%  (       67)
00:29:39.288   3470.429 -  3485.324:   97.3485%  (       64)
00:29:39.288   3485.324 -  3500.218:   97.4705%  (       67)
00:29:39.288   3500.218 -  3515.113:   97.5925%  (       67)
00:29:39.288   3515.113 -  3530.007:   97.7072%  (       63)
00:29:39.288   3530.007 -  3544.902:   97.8238%  (       64)
00:29:39.288   3544.902 -  3559.796:   97.9440%  (       66)
00:29:39.288   3559.796 -  3574.691:   98.0696%  (       69)
00:29:39.288   3574.691 -  3589.585:   98.1898%  (       66)
00:29:39.288   3589.585 -  3604.480:   98.3009%  (       61)
00:29:39.288   3604.480 -  3619.375:   98.4229%  (       67)
00:29:39.288   3619.375 -  3634.269:   98.5322%  (       60)
00:29:39.288   3634.269 -  3649.164:   98.6451%  (       62)
00:29:39.288   3649.164 -  3664.058:   98.7525%  (       59)
00:29:39.288   3664.058 -  3678.953:   98.8564%  (       57)
00:29:39.288   3678.953 -  3693.847:   98.9547%  (       54)
00:29:39.288   3693.847 -  3708.742:   99.0585%  (       57)
00:29:39.288   3708.742 -  3723.636:   99.1477%  (       49)
00:29:39.288   3723.636 -  3738.531:   99.2096%  (       34)
00:29:39.288   3738.531 -  3753.425:   99.2825%  (       40)
00:29:39.288   3753.425 -  3768.320:   99.3426%  (       33)
00:29:39.288   3768.320 -  3783.215:   99.3936%  (       28)
00:29:39.288   3783.215 -  3798.109:   99.4427%  (       27)
00:29:39.288   3798.109 -  3813.004:   99.4883%  (       25)
00:29:39.288   3813.004 -  3842.793:   99.5666%  (       43)
00:29:39.288   3842.793 -  3872.582:   99.6194%  (       29)
00:29:39.288   3872.582 -  3902.371:   99.6540%  (       19)
00:29:39.288   3902.371 -  3932.160:   99.6795%  (       14)
00:29:39.288   3932.160 -  3961.949:   99.7013%  (       12)
00:29:39.288   3961.949 -  3991.738:   99.7250%  (       13)
00:29:39.288   3991.738 -  4021.527:   99.7414%  (        9)
00:29:39.288   4021.527 -  4051.316:   99.7560%  (        8)
00:29:39.288   4051.316 -  4081.105:   99.7705%  (        8)
00:29:39.288   4081.105 -  4110.895:   99.7833%  (        7)
00:29:39.288   4110.895 -  4140.684:   99.7942%  (        6)
00:29:39.288   4140.684 -  4170.473:   99.7997%  (        3)
00:29:39.288   4170.473 -  4200.262:   99.8088%  (        5)
00:29:39.288   4200.262 -  4230.051:   99.8142%  (        3)
00:29:39.288   4230.051 -  4259.840:   99.8215%  (        4)
00:29:39.288   4259.840 -  4289.629:   99.8252%  (        2)
00:29:39.288   4289.629 -  4319.418:   99.8288%  (        2)
00:29:39.288   4319.418 -  4349.207:   99.8343%  (        3)
00:29:39.288   4349.207 -  4378.996:   99.8361%  (        1)
00:29:39.288   4408.785 -  4438.575:   99.8379%  (        1)
00:29:39.288   4438.575 -  4468.364:   99.8397%  (        1)
00:29:39.288   4468.364 -  4498.153:   99.8416%  (        1)
00:29:39.288   4498.153 -  4527.942:   99.8434%  (        1)
00:29:39.288   4557.731 -  4587.520:   99.8452%  (        1)
00:29:39.288   4587.520 -  4617.309:   99.8470%  (        1)
00:29:39.288   4617.309 -  4647.098:   99.8488%  (        1)
00:29:39.288   4647.098 -  4676.887:   99.8507%  (        1)
00:29:39.288   4676.887 -  4706.676:   99.8525%  (        1)
00:29:39.288   4706.676 -  4736.465:   99.8543%  (        1)
00:29:39.288   4766.255 -  4796.044:   99.8561%  (        1)
00:29:39.288   4796.044 -  4825.833:   99.8580%  (        1)
00:29:39.288   4825.833 -  4855.622:   99.8598%  (        1)
00:29:39.288   4855.622 -  4885.411:   99.8616%  (        1)
00:29:39.288   4885.411 -  4915.200:   99.8634%  (        1)
00:29:39.288   4915.200 -  4944.989:   99.8652%  (        1)
00:29:39.288   4944.989 -  4974.778:   99.8671%  (        1)
00:29:39.288   4974.778 -  5004.567:   99.8689%  (        1)
00:29:39.288   5004.567 -  5034.356:   99.8707%  (        1)
00:29:39.288   5034.356 -  5064.145:   99.8725%  (        1)
00:29:39.288   5093.935 -  5123.724:   99.8743%  (        1)
00:29:39.288   5123.724 -  5153.513:   99.8762%  (        1)
00:29:39.288   5153.513 -  5183.302:   99.8780%  (        1)
00:29:39.288   5183.302 -  5213.091:   99.8798%  (        1)
00:29:39.288   5213.091 -  5242.880:   99.8816%  (        1)
00:29:39.288   5272.669 -  5302.458:   99.8834%  (        1)
00:29:39.288   5302.458 -  5332.247:   99.8853%  (        1)
00:29:39.288   5332.247 -  5362.036:   99.8871%  (        1)
00:29:39.288   5362.036 -  5391.825:   99.8889%  (        1)
00:29:39.288   5391.825 -  5421.615:   99.8907%  (        1)
00:29:39.288   5421.615 -  5451.404:   99.8926%  (        1)
00:29:39.288   5481.193 -  5510.982:   99.8944%  (        1)
00:29:39.288   5510.982 -  5540.771:   99.8962%  (        1)
00:29:39.288   5540.771 -  5570.560:   99.8980%  (        1)
00:29:39.288   5570.560 -  5600.349:   99.8998%  (        1)
00:29:39.288   5600.349 -  5630.138:   99.9017%  (        1)
00:29:39.288   5630.138 -  5659.927:   99.9035%  (        1)
00:29:39.288   5659.927 -  5689.716:   99.9053%  (        1)
00:29:39.288   5689.716 -  5719.505:   99.9071%  (        1)
00:29:39.288   5719.505 -  5749.295:   99.9089%  (        1)
00:29:39.288   5779.084 -  5808.873:   99.9108%  (        1)
00:29:39.288   5808.873 -  5838.662:   99.9126%  (        1)
00:29:39.288   5838.662 -  5868.451:   99.9144%  (        1)
00:29:39.288   5898.240 -  5928.029:   99.9162%  (        1)
00:29:39.288   5928.029 -  5957.818:   99.9181%  (        1)
00:29:39.288   5957.818 -  5987.607:   99.9199%  (        1)
00:29:39.288   5987.607 -  6017.396:   99.9217%  (        1)
00:29:39.288   6017.396 -  6047.185:   99.9235%  (        1)
00:29:39.288   6047.185 -  6076.975:   99.9253%  (        1)
00:29:39.288   6076.975 -  6106.764:   99.9272%  (        1)
00:29:39.288   6106.764 -  6136.553:   99.9290%  (        1)
00:29:39.288   6166.342 -  6196.131:   99.9308%  (        1)
00:29:39.288   6196.131 -  6225.920:   99.9326%  (        1)
00:29:39.288   6225.920 -  6255.709:   99.9344%  (        1)
00:29:39.288   6255.709 -  6285.498:   99.9363%  (        1)
00:29:39.288   6285.498 -  6315.287:   99.9381%  (        1)
00:29:39.288   6315.287 -  6345.076:   99.9399%  (        1)
00:29:39.288   6345.076 -  6374.865:   99.9417%  (        1)
00:29:39.288   6374.865 -  6404.655:   99.9435%  (        1)
00:29:39.288   6404.655 -  6434.444:   99.9454%  (        1)
00:29:39.288   6434.444 -  6464.233:   99.9472%  (        1)
00:29:39.288   6494.022 -  6523.811:   99.9490%  (        1)
00:29:39.288   6523.811 -  6553.600:   99.9508%  (        1)
00:29:39.288   6553.600 -  6583.389:   99.9527%  (        1)
00:29:39.288   6583.389 -  6613.178:   99.9545%  (        1)
00:29:39.288   6613.178 -  6642.967:   99.9563%  (        1)
00:29:39.288   6642.967 -  6672.756:   99.9581%  (        1)
00:29:39.288   6672.756 -  6702.545:   99.9599%  (        1)
00:29:39.288   6702.545 -  6732.335:   99.9618%  (        1)
00:29:39.288   6732.335 -  6762.124:   99.9636%  (        1)
00:29:39.288   6762.124 -  6791.913:   99.9654%  (        1)
00:29:39.288   6791.913 -  6821.702:   99.9672%  (        1)
00:29:39.288   6821.702 -  6851.491:   99.9690%  (        1)
00:29:39.288   6851.491 -  6881.280:   99.9709%  (        1)
00:29:39.288   6911.069 -  6940.858:   99.9727%  (        1)
00:29:39.288   6940.858 -  6970.647:   99.9745%  (        1)
00:29:39.288   6970.647 -  7000.436:   99.9763%  (        1)
00:29:39.289   7000.436 -  7030.225:   99.9781%  (        1)
00:29:39.289   7030.225 -  7060.015:   99.9800%  (        1)
00:29:39.289   7060.015 -  7089.804:   99.9818%  (        1)
00:29:39.289   7089.804 -  7119.593:   99.9836%  (        1)
00:29:39.289   7119.593 -  7149.382:   99.9854%  (        1)
00:29:39.289   7179.171 -  7208.960:   99.9873%  (        1)
00:29:39.289   7208.960 -  7238.749:   99.9891%  (        1)
00:29:39.289   7238.749 -  7268.538:   99.9909%  (        1)
00:29:39.289   7268.538 -  7298.327:   99.9927%  (        1)
00:29:39.289   7298.327 -  7328.116:   99.9945%  (        1)
00:29:39.289   7328.116 -  7357.905:   99.9964%  (        1)
00:29:39.289   7357.905 -  7387.695:   99.9982%  (        1)
00:29:39.289   7447.273 -  7477.062:  100.0000%  (        1)
00:29:39.289  
00:29:39.289   00:03:09	-- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:29:40.668  Initializing NVMe Controllers
00:29:40.668  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:29:40.668  Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:29:40.668  Initialization complete. Launching workers.
00:29:40.668  ========================================================
00:29:40.668                                                                             Latency(us)
00:29:40.668  Device Information                     :       IOPS      MiB/s    Average        min        max
00:29:40.668  PCIE (0000:00:06.0) NSID 1 from core  0:   59548.00     697.83    2151.36    1076.33   11341.47
00:29:40.668  ========================================================
00:29:40.668  Total                                  :   59548.00     697.83    2151.36    1076.33   11341.47
00:29:40.668  
00:29:40.668  Summary latency data for PCIE (0000:00:06.0) NSID 1                  from core 0:
00:29:40.668  =================================================================================
00:29:40.668    1.00000% :  1578.822us
00:29:40.668   10.00000% :  1832.029us
00:29:40.668   25.00000% :  1966.080us
00:29:40.668   50.00000% :  2115.025us
00:29:40.668   75.00000% :  2293.760us
00:29:40.668   90.00000% :  2532.073us
00:29:40.668   95.00000% :  2725.702us
00:29:40.668   98.00000% :  2949.120us
00:29:40.668   99.00000% :  3112.960us
00:29:40.668   99.50000% :  3425.745us
00:29:40.668   99.90000% :  4259.840us
00:29:40.668   99.99000% : 11200.698us
00:29:40.668   99.99900% : 11379.433us
00:29:40.668   99.99990% : 11379.433us
00:29:40.668   99.99999% : 11379.433us
00:29:40.668  
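A note on reading the summary block above: the percentages are bucketed percentiles, so the 50.00000% line says the median 12 KiB write completed within about 2115 us, while everything from 99.99000% upward lands in the top buckets near the 11341.47 us maximum from the table. The printed values appear to be histogram bucket edges rather than exact observations, which is why the final entries read 11379.433us, slightly above the measured maximum.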
00:29:40.668  Latency histogram for PCIE (0000:00:06.0) NSID 1                  from core 0:
00:29:40.668  ==============================================================================
00:29:40.668         Range in us     Cumulative    IO count
00:29:40.668   1072.407 -  1079.855:    0.0034%  (        2)
00:29:40.668   1131.985 -  1139.433:    0.0050%  (        1)
00:29:40.668   1139.433 -  1146.880:    0.0084%  (        2)
00:29:40.668   1184.116 -  1191.564:    0.0101%  (        1)
00:29:40.668   1191.564 -  1199.011:    0.0453%  (       21)
00:29:40.668   1206.458 -  1213.905:    0.0470%  (        1)
00:29:40.668   1213.905 -  1221.353:    0.0504%  (        2)
00:29:40.668   1228.800 -  1236.247:    0.0521%  (        1)
00:29:40.668   1236.247 -  1243.695:    0.0571%  (        3)
00:29:40.668   1243.695 -  1251.142:    0.0605%  (        2)
00:29:40.668   1251.142 -  1258.589:    0.0638%  (        2)
00:29:40.668   1258.589 -  1266.036:    0.0672%  (        2)
00:29:40.668   1266.036 -  1273.484:    0.0772%  (        6)
00:29:40.668   1273.484 -  1280.931:    0.0789%  (        1)
00:29:40.668   1280.931 -  1288.378:    0.0856%  (        4)
00:29:40.668   1288.378 -  1295.825:    0.0957%  (        6)
00:29:40.668   1303.273 -  1310.720:    0.1092%  (        8)
00:29:40.668   1310.720 -  1318.167:    0.1142%  (        3)
00:29:40.668   1318.167 -  1325.615:    0.1243%  (        6)
00:29:40.668   1325.615 -  1333.062:    0.1343%  (        6)
00:29:40.668   1333.062 -  1340.509:    0.1394%  (        3)
00:29:40.668   1340.509 -  1347.956:    0.1461%  (        4)
00:29:40.668   1347.956 -  1355.404:    0.1562%  (        6)
00:29:40.668   1355.404 -  1362.851:    0.1595%  (        2)
00:29:40.668   1362.851 -  1370.298:    0.1814%  (       13)
00:29:40.668   1370.298 -  1377.745:    0.1881%  (        4)
00:29:40.668   1377.745 -  1385.193:    0.1982%  (        6)
00:29:40.668   1385.193 -  1392.640:    0.2066%  (        5)
00:29:40.668   1392.640 -  1400.087:    0.2267%  (       12)
00:29:40.668   1400.087 -  1407.535:    0.2351%  (        5)
00:29:40.668   1407.535 -  1414.982:    0.2569%  (       13)
00:29:40.668   1414.982 -  1422.429:    0.2670%  (        6)
00:29:40.668   1422.429 -  1429.876:    0.2922%  (       15)
00:29:40.668   1429.876 -  1437.324:    0.2972%  (        3)
00:29:40.668   1437.324 -  1444.771:    0.3174%  (       12)
00:29:40.668   1444.771 -  1452.218:    0.3375%  (       12)
00:29:40.668   1452.218 -  1459.665:    0.3527%  (        9)
00:29:40.668   1459.665 -  1467.113:    0.3694%  (       10)
00:29:40.668   1467.113 -  1474.560:    0.4081%  (       23)
00:29:40.668   1474.560 -  1482.007:    0.4282%  (       12)
00:29:40.668   1482.007 -  1489.455:    0.4601%  (       19)
00:29:40.668   1489.455 -  1496.902:    0.4752%  (        9)
00:29:40.668   1496.902 -  1504.349:    0.5021%  (       16)
00:29:40.668   1504.349 -  1511.796:    0.5189%  (       10)
00:29:40.668   1511.796 -  1519.244:    0.5374%  (       11)
00:29:40.668   1519.244 -  1526.691:    0.5760%  (       23)
00:29:40.668   1526.691 -  1534.138:    0.6146%  (       23)
00:29:40.668   1534.138 -  1541.585:    0.7120%  (       58)
00:29:40.668   1541.585 -  1549.033:    0.7708%  (       35)
00:29:40.668   1549.033 -  1556.480:    0.8279%  (       34)
00:29:40.668   1556.480 -  1563.927:    0.9337%  (       63)
00:29:40.668   1563.927 -  1571.375:    0.9874%  (       32)
00:29:40.668   1571.375 -  1578.822:    1.0630%  (       45)
00:29:40.668   1578.822 -  1586.269:    1.1554%  (       55)
00:29:40.668   1586.269 -  1593.716:    1.2662%  (       66)
00:29:40.668   1593.716 -  1601.164:    1.3636%  (       58)
00:29:40.668   1601.164 -  1608.611:    1.4358%  (       43)
00:29:40.668   1608.611 -  1616.058:    1.5198%  (       50)
00:29:40.668   1616.058 -  1623.505:    1.5987%  (       47)
00:29:40.668   1623.505 -  1630.953:    1.6793%  (       48)
00:29:40.668   1630.953 -  1638.400:    1.8271%  (       88)
00:29:40.668   1638.400 -  1645.847:    1.9178%  (       54)
00:29:40.668   1645.847 -  1653.295:    2.0353%  (       70)
00:29:40.668   1653.295 -  1660.742:    2.1378%  (       61)
00:29:40.668   1660.742 -  1668.189:    2.2637%  (       75)
00:29:40.668   1668.189 -  1675.636:    2.4283%  (       98)
00:29:40.668   1675.636 -  1683.084:    2.6113%  (      109)
00:29:40.668   1683.084 -  1690.531:    2.8145%  (      121)
00:29:40.668   1690.531 -  1697.978:    3.0026%  (      112)
00:29:40.668   1697.978 -  1705.425:    3.2058%  (      121)
00:29:40.668   1705.425 -  1712.873:    3.4308%  (      134)
00:29:40.668   1712.873 -  1720.320:    3.6592%  (      136)
00:29:40.668   1720.320 -  1727.767:    3.9044%  (      146)
00:29:40.668   1727.767 -  1735.215:    4.1899%  (      170)
00:29:40.668   1735.215 -  1742.662:    4.4703%  (      167)
00:29:40.668   1742.662 -  1750.109:    4.7458%  (      164)
00:29:40.668   1750.109 -  1757.556:    5.1118%  (      218)
00:29:40.668   1757.556 -  1765.004:    5.5149%  (      240)
00:29:40.668   1765.004 -  1772.451:    5.9213%  (      242)
00:29:40.668   1772.451 -  1779.898:    6.3529%  (      257)
00:29:40.668   1779.898 -  1787.345:    6.7693%  (      248)
00:29:40.668   1787.345 -  1794.793:    7.3302%  (      334)
00:29:40.668   1794.793 -  1802.240:    7.9029%  (      341)
00:29:40.668   1802.240 -  1809.687:    8.3966%  (      294)
00:29:40.668   1809.687 -  1817.135:    8.9457%  (      327)
00:29:40.669   1817.135 -  1824.582:    9.6510%  (      420)
00:29:40.669   1824.582 -  1832.029:   10.3228%  (      400)
00:29:40.669   1832.029 -  1839.476:   11.0869%  (      455)
00:29:40.669   1839.476 -  1846.924:   11.7619%  (      402)
00:29:40.669   1846.924 -  1854.371:   12.5663%  (      479)
00:29:40.669   1854.371 -  1861.818:   13.2800%  (      425)
00:29:40.669   1861.818 -  1869.265:   14.2305%  (      566)
00:29:40.669   1869.265 -  1876.713:   15.0249%  (      473)
00:29:40.669   1876.713 -  1884.160:   15.7638%  (      440)
00:29:40.669   1884.160 -  1891.607:   16.5849%  (      489)
00:29:40.669   1891.607 -  1899.055:   17.3910%  (      480)
00:29:40.669   1899.055 -  1906.502:   18.2777%  (      528)
00:29:40.669   1906.502 -  1921.396:   20.3416%  (     1229)
00:29:40.669   1921.396 -  1936.291:   22.1636%  (     1085)
00:29:40.669   1936.291 -  1951.185:   24.5567%  (     1425)
00:29:40.669   1951.185 -  1966.080:   26.9732%  (     1439)
00:29:40.669   1966.080 -  1980.975:   29.5896%  (     1558)
00:29:40.669   1980.975 -  1995.869:   31.8012%  (     1317)
00:29:40.669   1995.869 -  2010.764:   34.4596%  (     1583)
00:29:40.669   2010.764 -  2025.658:   36.8795%  (     1441)
00:29:40.669   2025.658 -  2040.553:   39.1718%  (     1365)
00:29:40.669   2040.553 -  2055.447:   41.5077%  (     1391)
00:29:40.669   2055.447 -  2070.342:   43.9528%  (     1456)
00:29:40.669   2070.342 -  2085.236:   46.4650%  (     1496)
00:29:40.669   2085.236 -  2100.131:   48.9068%  (     1454)
00:29:40.669   2100.131 -  2115.025:   51.2024%  (     1367)
00:29:40.669   2115.025 -  2129.920:   53.7751%  (     1532)
00:29:40.669   2129.920 -  2144.815:   56.0892%  (     1378)
00:29:40.669   2144.815 -  2159.709:   58.3966%  (     1374)
00:29:40.669   2159.709 -  2174.604:   60.3278%  (     1150)
00:29:40.669   2174.604 -  2189.498:   62.4941%  (     1290)
00:29:40.669   2189.498 -  2204.393:   64.6655%  (     1293)
00:29:40.669   2204.393 -  2219.287:   66.6807%  (     1200)
00:29:40.669   2219.287 -  2234.182:   68.6841%  (     1193)
00:29:40.669   2234.182 -  2249.076:   70.5918%  (     1136)
00:29:40.669   2249.076 -  2263.971:   72.3719%  (     1060)
00:29:40.669   2263.971 -  2278.865:   74.1234%  (     1043)
00:29:40.669   2278.865 -  2293.760:   75.7742%  (      983)
00:29:40.669   2293.760 -  2308.655:   77.3796%  (      956)
00:29:40.669   2308.655 -  2323.549:   78.8675%  (      886)
00:29:40.669   2323.549 -  2338.444:   80.2731%  (      837)
00:29:40.669   2338.444 -  2353.338:   81.7374%  (      872)
00:29:40.669   2353.338 -  2368.233:   82.8323%  (      652)
00:29:40.669   2368.233 -  2383.127:   83.7812%  (      565)
00:29:40.669   2383.127 -  2398.022:   84.6964%  (      545)
00:29:40.669   2398.022 -  2412.916:   85.5360%  (      500)
00:29:40.669   2412.916 -  2427.811:   86.2548%  (      428)
00:29:40.669   2427.811 -  2442.705:   86.9769%  (      430)
00:29:40.669   2442.705 -  2457.600:   87.6822%  (      420)
00:29:40.669   2457.600 -  2472.495:   88.2448%  (      335)
00:29:40.669   2472.495 -  2487.389:   88.7603%  (      307)
00:29:40.669   2487.389 -  2502.284:   89.3179%  (      332)
00:29:40.669   2502.284 -  2517.178:   89.8082%  (      292)
00:29:40.669   2517.178 -  2532.073:   90.3087%  (      298)
00:29:40.669   2532.073 -  2546.967:   90.7587%  (      268)
00:29:40.669   2546.967 -  2561.862:   91.1920%  (      258)
00:29:40.669   2561.862 -  2576.756:   91.6555%  (      276)
00:29:40.669   2576.756 -  2591.651:   92.0787%  (      252)
00:29:40.669   2591.651 -  2606.545:   92.4649%  (      230)
00:29:40.669   2606.545 -  2621.440:   92.8209%  (      212)
00:29:40.669   2621.440 -  2636.335:   93.2139%  (      234)
00:29:40.669   2636.335 -  2651.229:   93.5497%  (      200)
00:29:40.669   2651.229 -  2666.124:   93.8839%  (      199)
00:29:40.669   2666.124 -  2681.018:   94.1812%  (      177)
00:29:40.669   2681.018 -  2695.913:   94.4851%  (      181)
00:29:40.669   2695.913 -  2710.807:   94.7572%  (      162)
00:29:40.669   2710.807 -  2725.702:   95.0158%  (      154)
00:29:40.669   2725.702 -  2740.596:   95.2811%  (      158)
00:29:40.669   2740.596 -  2755.491:   95.5347%  (      151)
00:29:40.669   2755.491 -  2770.385:   95.7765%  (      144)
00:29:40.669   2770.385 -  2785.280:   96.0083%  (      138)
00:29:40.669   2785.280 -  2800.175:   96.2098%  (      120)
00:29:40.669   2800.175 -  2815.069:   96.4247%  (      128)
00:29:40.669   2815.069 -  2829.964:   96.6380%  (      127)
00:29:40.669   2829.964 -  2844.858:   96.8261%  (      112)
00:29:40.669   2844.858 -  2859.753:   97.0226%  (      117)
00:29:40.669   2859.753 -  2874.647:   97.2123%  (      113)
00:29:40.669   2874.647 -  2889.542:   97.4155%  (      121)
00:29:40.669   2889.542 -  2904.436:   97.5784%  (       97)
00:29:40.669   2904.436 -  2919.331:   97.7312%  (       91)
00:29:40.669   2919.331 -  2934.225:   97.8874%  (       93)
00:29:40.669   2934.225 -  2949.120:   98.0234%  (       81)
00:29:40.669   2949.120 -  2964.015:   98.1561%  (       79)
00:29:40.669   2964.015 -  2978.909:   98.2837%  (       76)
00:29:40.669   2978.909 -  2993.804:   98.4013%  (       70)
00:29:40.669   2993.804 -  3008.698:   98.5071%  (       63)
00:29:40.669   3008.698 -  3023.593:   98.6078%  (       60)
00:29:40.669   3023.593 -  3038.487:   98.6968%  (       53)
00:29:40.669   3038.487 -  3053.382:   98.7674%  (       42)
00:29:40.669   3053.382 -  3068.276:   98.8581%  (       54)
00:29:40.669   3068.276 -  3083.171:   98.9084%  (       30)
00:29:40.669   3083.171 -  3098.065:   98.9639%  (       33)
00:29:40.669   3098.065 -  3112.960:   99.0025%  (       23)
00:29:40.669   3112.960 -  3127.855:   99.0394%  (       22)
00:29:40.669   3127.855 -  3142.749:   99.0713%  (       19)
00:29:40.669   3142.749 -  3157.644:   99.0932%  (       13)
00:29:40.669   3157.644 -  3172.538:   99.1217%  (       17)
00:29:40.669   3172.538 -  3187.433:   99.1452%  (       14)
00:29:40.669   3187.433 -  3202.327:   99.1704%  (       15)
00:29:40.669   3202.327 -  3217.222:   99.2090%  (       23)
00:29:40.669   3217.222 -  3232.116:   99.2359%  (       16)
00:29:40.669   3232.116 -  3247.011:   99.2661%  (       18)
00:29:40.669   3247.011 -  3261.905:   99.2930%  (       16)
00:29:40.669   3261.905 -  3276.800:   99.3199%  (       16)
00:29:40.669   3276.800 -  3291.695:   99.3467%  (       16)
00:29:40.669   3291.695 -  3306.589:   99.3686%  (       13)
00:29:40.669   3306.589 -  3321.484:   99.3904%  (       13)
00:29:40.669   3321.484 -  3336.378:   99.4122%  (       13)
00:29:40.669   3336.378 -  3351.273:   99.4290%  (       10)
00:29:40.669   3351.273 -  3366.167:   99.4559%  (       16)
00:29:40.669   3366.167 -  3381.062:   99.4710%  (        9)
00:29:40.669   3381.062 -  3395.956:   99.4828%  (        7)
00:29:40.669   3395.956 -  3410.851:   99.4962%  (        8)
00:29:40.669   3410.851 -  3425.745:   99.5080%  (        7)
00:29:40.669   3425.745 -  3440.640:   99.5231%  (        9)
00:29:40.669   3440.640 -  3455.535:   99.5365%  (        8)
00:29:40.669   3455.535 -  3470.429:   99.5516%  (        9)
00:29:40.669   3470.429 -  3485.324:   99.5617%  (        6)
00:29:40.669   3485.324 -  3500.218:   99.5735%  (        7)
00:29:40.669   3500.218 -  3515.113:   99.5869%  (        8)
00:29:40.669   3515.113 -  3530.007:   99.5953%  (        5)
00:29:40.669   3530.007 -  3544.902:   99.6054%  (        6)
00:29:40.669   3544.902 -  3559.796:   99.6138%  (        5)
00:29:40.669   3559.796 -  3574.691:   99.6205%  (        4)
00:29:40.669   3574.691 -  3589.585:   99.6306%  (        6)
00:29:40.669   3589.585 -  3604.480:   99.6373%  (        4)
00:29:40.669   3604.480 -  3619.375:   99.6389%  (        1)
00:29:40.669   3619.375 -  3634.269:   99.6524%  (        8)
00:29:40.669   3634.269 -  3649.164:   99.6658%  (        8)
00:29:40.669   3649.164 -  3664.058:   99.6759%  (        6)
00:29:40.669   3664.058 -  3678.953:   99.6876%  (        7)
00:29:40.669   3678.953 -  3693.847:   99.6977%  (        6)
00:29:40.669   3693.847 -  3708.742:   99.7078%  (        6)
00:29:40.669   3708.742 -  3723.636:   99.7128%  (        3)
00:29:40.669   3723.636 -  3738.531:   99.7330%  (       12)
00:29:40.669   3738.531 -  3753.425:   99.7464%  (        8)
00:29:40.669   3753.425 -  3768.320:   99.7548%  (        5)
00:29:40.669   3768.320 -  3783.215:   99.7615%  (        4)
00:29:40.669   3783.215 -  3798.109:   99.7716%  (        6)
00:29:40.669   3798.109 -  3813.004:   99.7817%  (        6)
00:29:40.669   3813.004 -  3842.793:   99.8136%  (       19)
00:29:40.669   3842.793 -  3872.582:   99.8287%  (        9)
00:29:40.669   3872.582 -  3902.371:   99.8438%  (        9)
00:29:40.669   3902.371 -  3932.160:   99.8539%  (        6)
00:29:40.669   3932.160 -  3961.949:   99.8623%  (        5)
00:29:40.669   3961.949 -  3991.738:   99.8724%  (        6)
00:29:40.669   3991.738 -  4021.527:   99.8757%  (        2)
00:29:40.669   4021.527 -  4051.316:   99.8774%  (        1)
00:29:40.669   4051.316 -  4081.105:   99.8808%  (        2)
00:29:40.669   4081.105 -  4110.895:   99.8824%  (        1)
00:29:40.669   4110.895 -  4140.684:   99.8841%  (        1)
00:29:40.669   4140.684 -  4170.473:   99.8892%  (        3)
00:29:40.669   4170.473 -  4200.262:   99.8942%  (        3)
00:29:40.669   4200.262 -  4230.051:   99.8992%  (        3)
00:29:40.669   4230.051 -  4259.840:   99.9009%  (        1)
00:29:40.669   4259.840 -  4289.629:   99.9026%  (        1)
00:29:40.669   4289.629 -  4319.418:   99.9043%  (        1)
00:29:40.669   4319.418 -  4349.207:   99.9060%  (        1)
00:29:40.669   4349.207 -  4378.996:   99.9093%  (        2)
00:29:40.669   4378.996 -  4408.785:   99.9110%  (        1)
00:29:40.669   4408.785 -  4438.575:   99.9127%  (        1)
00:29:40.669   4438.575 -  4468.364:   99.9144%  (        1)
00:29:40.669   4468.364 -  4498.153:   99.9177%  (        2)
00:29:40.669   4527.942 -  4557.731:   99.9211%  (        2)
00:29:40.669   4557.731 -  4587.520:   99.9228%  (        1)
00:29:40.669   4587.520 -  4617.309:   99.9244%  (        1)
00:29:40.669   4617.309 -  4647.098:   99.9261%  (        1)
00:29:40.669   4647.098 -  4676.887:   99.9278%  (        1)
00:29:40.669   4676.887 -  4706.676:   99.9311%  (        2)
00:29:40.669   4706.676 -  4736.465:   99.9328%  (        1)
00:29:40.669   4766.255 -  4796.044:   99.9345%  (        1)
00:29:40.669   4796.044 -  4825.833:   99.9362%  (        1)
00:29:40.669   4825.833 -  4855.622:   99.9395%  (        2)
00:29:40.669   4855.622 -  4885.411:   99.9412%  (        1)
00:29:40.669   4885.411 -  4915.200:   99.9429%  (        1)
00:29:40.669   4915.200 -  4944.989:   99.9446%  (        1)
00:29:40.669   4944.989 -  4974.778:   99.9479%  (        2)
00:29:40.669   4974.778 -  5004.567:   99.9496%  (        1)
00:29:40.669   5004.567 -  5034.356:   99.9513%  (        1)
00:29:40.669   5034.356 -  5064.145:   99.9530%  (        1)
00:29:40.669   5064.145 -  5093.935:   99.9547%  (        1)
00:29:40.669   5093.935 -  5123.724:   99.9563%  (        1)
00:29:40.669   5123.724 -  5153.513:   99.9580%  (        1)
00:29:40.669   5153.513 -  5183.302:   99.9597%  (        1)
00:29:40.669   5183.302 -  5213.091:   99.9614%  (        1)
00:29:40.669   5213.091 -  5242.880:   99.9647%  (        2)
00:29:40.669   5242.880 -  5272.669:   99.9664%  (        1)
00:29:40.669   5272.669 -  5302.458:   99.9698%  (        2)
00:29:40.669   5302.458 -  5332.247:   99.9715%  (        1)
00:29:40.669   5332.247 -  5362.036:   99.9731%  (        1)
00:29:40.669   5391.825 -  5421.615:   99.9748%  (        1)
00:29:40.669   9234.618 -  9294.196:   99.9765%  (        1)
00:29:40.669   9592.087 -  9651.665:   99.9782%  (        1)
00:29:40.670   9711.244 -  9770.822:   99.9798%  (        1)
00:29:40.670   9770.822 -  9830.400:   99.9815%  (        1)
00:29:40.670   9889.978 -  9949.556:   99.9832%  (        1)
00:29:40.670  10843.229 - 10902.807:   99.9849%  (        1)
00:29:40.670  10962.385 - 11021.964:   99.9866%  (        1)
00:29:40.670  11081.542 - 11141.120:   99.9882%  (        1)
00:29:40.670  11141.120 - 11200.698:   99.9916%  (        2)
00:29:40.670  11200.698 - 11260.276:   99.9966%  (        3)
00:29:40.670  11260.276 - 11319.855:   99.9983%  (        1)
00:29:40.670  11319.855 - 11379.433:  100.0000%  (        1)
00:29:40.670  
00:29:40.670   00:03:11	-- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:29:40.670  
00:29:40.670  real	0m2.657s
00:29:40.670  user	0m2.257s
00:29:40.670  sys	0m0.261s
00:29:40.670   00:03:11	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:40.670   00:03:11	-- common/autotest_common.sh@10 -- # set +x
00:29:40.670  ************************************
00:29:40.670  END TEST nvme_perf
00:29:40.670  ************************************
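For reference, the nvme_perf invocation above is self-contained: -q sets the queue depth (128), -w the I/O pattern (write), -o the I/O size in bytes (12288, i.e. 12 KiB), -t the run time in seconds, and giving -L twice (-LL) is what produces both the percentile summary and the full latency histogram; -i 0 ties the process into the run's shared DPDK memory group. A minimal standalone sketch, where the -r transport filter is an added assumption (the address string format is taken from the doorbell_aers trace later in this log):

  # Sketch: 1-second, QD128, 12 KiB write test with software latency
  # tracking, pinned to the emulated controller at 0000:00:06.0.
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -q 128 -w write -o 12288 -t 1 -LL \
      -r 'trtype:PCIe traddr:0000:00:06.0'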
00:29:40.670   00:03:11	-- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:29:40.670   00:03:11	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:29:40.670   00:03:11	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:40.670   00:03:11	-- common/autotest_common.sh@10 -- # set +x
00:29:40.670  ************************************
00:29:40.670  START TEST nvme_hello_world
00:29:40.670  ************************************
00:29:40.670   00:03:11	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:29:40.928  Initializing NVMe Controllers
00:29:40.928  Attached to 0000:00:06.0
00:29:40.928    Namespace ID: 1 size: 5GB
00:29:40.928  Initialization complete.
00:29:40.928  INFO: using host memory buffer for IO
00:29:40.928  Hello world!
00:29:40.928  
00:29:40.928  real	0m0.303s
00:29:40.928  user	0m0.105s
00:29:40.928  sys	0m0.127s
00:29:40.928   00:03:11	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:40.928   00:03:11	-- common/autotest_common.sh@10 -- # set +x
00:29:40.928  ************************************
00:29:40.928  END TEST nvme_hello_world
00:29:40.928  ************************************
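The START TEST / END TEST banners and the real/user/sys timings that bracket every test here come from the harness's run_test helper in autotest_common.sh. Its actual body is not shown in this log, so the following is only a sketch inferred from the observable output:

  # Sketch of a run_test-style wrapper: banner, time the command,
  # banner again (inferred from this log, not copied from SPDK).
  run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
  }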
00:29:40.928   00:03:11	-- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:29:40.928   00:03:11	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:40.928   00:03:11	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:40.928   00:03:11	-- common/autotest_common.sh@10 -- # set +x
00:29:40.928  ************************************
00:29:40.928  START TEST nvme_sgl
00:29:40.928  ************************************
00:29:40.928   00:03:11	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:29:41.186  0000:00:06.0: build_io_request_0 Invalid IO length parameter
00:29:41.186  0000:00:06.0: build_io_request_1 Invalid IO length parameter
00:29:41.186  0000:00:06.0: build_io_request_3 Invalid IO length parameter
00:29:41.444  0000:00:06.0: build_io_request_8 Invalid IO length parameter
00:29:41.444  0000:00:06.0: build_io_request_9 Invalid IO length parameter
00:29:41.444  0000:00:06.0: build_io_request_11 Invalid IO length parameter
00:29:41.444  NVMe Readv/Writev Request test
00:29:41.444  Attached to 0000:00:06.0
00:29:41.444  0000:00:06.0: build_io_request_2 test passed
00:29:41.444  0000:00:06.0: build_io_request_4 test passed
00:29:41.444  0000:00:06.0: build_io_request_5 test passed
00:29:41.444  0000:00:06.0: build_io_request_6 test passed
00:29:41.444  0000:00:06.0: build_io_request_7 test passed
00:29:41.444  0000:00:06.0: build_io_request_10 test passed
00:29:41.444  Cleaning up...
00:29:41.444  
00:29:41.444  real	0m0.330s
00:29:41.444  user	0m0.148s
00:29:41.444  sys	0m0.107s
00:29:41.444   00:03:11	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:41.444   00:03:11	-- common/autotest_common.sh@10 -- # set +x
00:29:41.444  ************************************
00:29:41.444  END TEST nvme_sgl
00:29:41.444  ************************************
00:29:41.444   00:03:12	-- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:29:41.444   00:03:12	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:41.444   00:03:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:41.444   00:03:12	-- common/autotest_common.sh@10 -- # set +x
00:29:41.444  ************************************
00:29:41.444  START TEST nvme_e2edp
00:29:41.444  ************************************
00:29:41.444   00:03:12	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:29:41.703  NVMe Write/Read with End-to-End data protection test
00:29:41.703  Attached to 0000:00:06.0
00:29:41.703  Cleaning up...
00:29:41.703  
00:29:41.703  real	0m0.229s
00:29:41.703  user	0m0.076s
00:29:41.703  sys	0m0.090s
00:29:41.703   00:03:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:41.703   00:03:12	-- common/autotest_common.sh@10 -- # set +x
00:29:41.703  ************************************
00:29:41.703  END TEST nvme_e2edp
00:29:41.703  ************************************
00:29:41.703   00:03:12	-- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:29:41.703   00:03:12	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:41.703   00:03:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:41.703   00:03:12	-- common/autotest_common.sh@10 -- # set +x
00:29:41.703  ************************************
00:29:41.703  START TEST nvme_reserve
00:29:41.703  ************************************
00:29:41.703   00:03:12	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:29:41.962  =====================================================
00:29:41.962  NVMe Controller at PCI bus 0, device 6, function 0
00:29:41.962  =====================================================
00:29:41.962  Reservations:                Not Supported
00:29:41.962  Reservation test passed
00:29:41.962  
00:29:41.962  real	0m0.282s
00:29:41.962  user	0m0.091s
00:29:41.962  sys	0m0.130s
00:29:41.962   00:03:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:41.962   00:03:12	-- common/autotest_common.sh@10 -- # set +x
00:29:41.962  ************************************
00:29:41.962  END TEST nvme_reserve
00:29:41.962  ************************************
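Worth noting: the reserve test reports "Reservation test passed" even though this emulated controller advertises reservations as Not Supported, so the tool evidently counts an absent optional feature as a trivial pass rather than a failure.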
00:29:41.962   00:03:12	-- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:29:41.962   00:03:12	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:41.962   00:03:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:41.962   00:03:12	-- common/autotest_common.sh@10 -- # set +x
00:29:41.962  ************************************
00:29:41.962  START TEST nvme_err_injection
00:29:41.962  ************************************
00:29:41.962   00:03:12	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:29:42.221  NVMe Error Injection test
00:29:42.221  Attached to 0000:00:06.0
00:29:42.221  0000:00:06.0: get features failed as expected
00:29:42.221  0000:00:06.0: get features successfully as expected
00:29:42.221  0000:00:06.0: read failed as expected
00:29:42.221  0000:00:06.0: read successfully as expected
00:29:42.221  Cleaning up...
00:29:42.479  
00:29:42.479  real	0m0.324s
00:29:42.479  user	0m0.135s
00:29:42.479  sys	0m0.102s
00:29:42.479   00:03:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:42.479   00:03:12	-- common/autotest_common.sh@10 -- # set +x
00:29:42.479  ************************************
00:29:42.479  END TEST nvme_err_injection
00:29:42.479  ************************************
00:29:42.479   00:03:12	-- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:29:42.479   00:03:12	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:29:42.479   00:03:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:42.479   00:03:12	-- common/autotest_common.sh@10 -- # set +x
00:29:42.479  ************************************
00:29:42.479  START TEST nvme_overhead
00:29:42.479  ************************************
00:29:42.479   00:03:12	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:29:43.858  Initializing NVMe Controllers
00:29:43.858  Attached to 0000:00:06.0
00:29:43.858  Initialization complete. Launching workers.
00:29:43.858  submit (in ns)   avg, min, max =  15076.1,  12312.7,  48576.4
00:29:43.858  complete (in ns) avg, min, max =   9082.0,   7419.1,  54449.1
00:29:43.858  
00:29:43.858  Submit histogram
00:29:43.858  ================
00:29:43.858         Range in us     Cumulative     Count
00:29:43.858     12.276 -    12.335:    0.0370%  (        3)
00:29:43.858     12.335 -    12.393:    0.4930%  (       37)
00:29:43.858     12.393 -    12.451:    1.6638%  (       95)
00:29:43.858     12.451 -    12.509:    3.0441%  (      112)
00:29:43.858     12.509 -    12.567:    4.4737%  (      116)
00:29:43.858     12.567 -    12.625:    7.0495%  (      209)
00:29:43.858     12.625 -    12.684:   12.4846%  (      441)
00:29:43.858     12.684 -    12.742:   20.8528%  (      679)
00:29:43.858     12.742 -    12.800:   30.5521%  (      787)
00:29:43.858     12.800 -    12.858:   38.9820%  (      684)
00:29:43.858     12.858 -    12.916:   45.5879%  (      536)
00:29:43.858     12.916 -    12.975:   51.6022%  (      488)
00:29:43.858     12.975 -    13.033:   56.9509%  (      434)
00:29:43.858     13.033 -    13.091:   62.0286%  (      412)
00:29:43.858     13.091 -    13.149:   66.6256%  (      373)
00:29:43.858     13.149 -    13.207:   70.3845%  (      305)
00:29:43.858     13.207 -    13.265:   73.3054%  (      237)
00:29:43.858     13.265 -    13.324:   75.5238%  (      180)
00:29:43.858     13.324 -    13.382:   77.4217%  (      154)
00:29:43.858     13.382 -    13.440:   79.1225%  (      138)
00:29:43.858     13.440 -    13.498:   80.4659%  (      109)
00:29:43.858     13.498 -    13.556:   81.5751%  (       90)
00:29:43.858     13.556 -    13.615:   82.4378%  (       70)
00:29:43.858     13.615 -    13.673:   82.8814%  (       36)
00:29:43.858     13.673 -    13.731:   83.2265%  (       28)
00:29:43.858     13.731 -    13.789:   83.4853%  (       21)
00:29:43.858     13.789 -    13.847:   83.6579%  (       14)
00:29:43.858     13.847 -    13.905:   83.8058%  (       12)
00:29:43.858     13.905 -    13.964:   84.0399%  (       19)
00:29:43.858     13.964 -    14.022:   84.1632%  (       10)
00:29:43.858     14.022 -    14.080:   84.2987%  (       11)
00:29:43.858     14.080 -    14.138:   84.3480%  (        4)
00:29:43.858     14.138 -    14.196:   84.4466%  (        8)
00:29:43.858     14.196 -    14.255:   84.5083%  (        5)
00:29:43.858     14.255 -    14.313:   84.5452%  (        3)
00:29:43.858     14.313 -    14.371:   84.5822%  (        3)
00:29:43.858     14.371 -    14.429:   84.6561%  (        6)
00:29:43.858     14.429 -    14.487:   84.6808%  (        2)
00:29:43.858     14.487 -    14.545:   84.7054%  (        2)
00:29:43.858     14.604 -    14.662:   84.7547%  (        4)
00:29:43.858     14.662 -    14.720:   84.7794%  (        2)
00:29:43.858     14.720 -    14.778:   84.8040%  (        2)
00:29:43.858     14.778 -    14.836:   84.8287%  (        2)
00:29:43.858     14.895 -    15.011:   84.8780%  (        4)
00:29:43.858     15.011 -    15.127:   84.9026%  (        2)
00:29:43.858     15.127 -    15.244:   84.9150%  (        1)
00:29:43.858     15.244 -    15.360:   84.9396%  (        2)
00:29:43.858     15.360 -    15.476:   84.9766%  (        3)
00:29:43.858     15.476 -    15.593:   84.9889%  (        1)
00:29:43.858     15.593 -    15.709:   85.0012%  (        1)
00:29:43.858     15.709 -    15.825:   85.0382%  (        3)
00:29:43.858     15.942 -    16.058:   85.0505%  (        1)
00:29:43.858     16.058 -    16.175:   85.1245%  (        6)
00:29:43.858     16.175 -    16.291:   85.1491%  (        2)
00:29:43.858     16.291 -    16.407:   85.1738%  (        2)
00:29:43.858     16.407 -    16.524:   85.1861%  (        1)
00:29:43.858     16.524 -    16.640:   85.2107%  (        2)
00:29:43.858     16.640 -    16.756:   85.2354%  (        2)
00:29:43.858     16.756 -    16.873:   85.2477%  (        1)
00:29:43.858     16.873 -    16.989:   85.2724%  (        2)
00:29:43.858     17.105 -    17.222:   85.2847%  (        1)
00:29:43.858     17.222 -    17.338:   85.3340%  (        4)
00:29:43.858     17.338 -    17.455:   85.3463%  (        1)
00:29:43.858     17.455 -    17.571:   85.3586%  (        1)
00:29:43.858     17.571 -    17.687:   85.4203%  (        5)
00:29:43.858     17.687 -    17.804:   85.4819%  (        5)
00:29:43.858     17.804 -    17.920:   85.5189%  (        3)
00:29:43.858     17.920 -    18.036:   85.5435%  (        2)
00:29:43.858     18.036 -    18.153:   85.5682%  (        2)
00:29:43.858     18.153 -    18.269:   85.6298%  (        5)
00:29:43.858     18.269 -    18.385:   85.6667%  (        3)
00:29:43.858     18.385 -    18.502:   85.6914%  (        2)
00:29:43.858     18.502 -    18.618:   85.7284%  (        3)
00:29:43.858     18.618 -    18.735:   85.7900%  (        5)
00:29:43.858     18.735 -    18.851:   85.8270%  (        3)
00:29:43.858     18.851 -    18.967:   85.8516%  (        2)
00:29:43.858     18.967 -    19.084:   85.8763%  (        2)
00:29:43.858     19.084 -    19.200:   85.9256%  (        4)
00:29:43.858     19.200 -    19.316:   85.9502%  (        2)
00:29:43.858     19.316 -    19.433:   85.9749%  (        2)
00:29:43.858     19.433 -    19.549:   86.0118%  (        3)
00:29:43.858     19.549 -    19.665:   86.0365%  (        2)
00:29:43.858     19.665 -    19.782:   86.0611%  (        2)
00:29:43.858     19.782 -    19.898:   86.0858%  (        2)
00:29:43.858     19.898 -    20.015:   86.1228%  (        3)
00:29:43.858     20.015 -    20.131:   86.1351%  (        1)
00:29:43.858     20.131 -    20.247:   86.1597%  (        2)
00:29:43.858     20.247 -    20.364:   86.1844%  (        2)
00:29:43.858     20.364 -    20.480:   86.1967%  (        1)
00:29:43.858     20.480 -    20.596:   86.2337%  (        3)
00:29:43.858     20.596 -    20.713:   86.2706%  (        3)
00:29:43.858     20.713 -    20.829:   86.3692%  (        8)
00:29:43.858     21.062 -    21.178:   86.3939%  (        2)
00:29:43.858     21.178 -    21.295:   86.4185%  (        2)
00:29:43.858     21.411 -    21.527:   86.4309%  (        1)
00:29:43.858     21.644 -    21.760:   86.4555%  (        2)
00:29:43.858     21.760 -    21.876:   86.4802%  (        2)
00:29:43.858     21.876 -    21.993:   86.5295%  (        4)
00:29:43.858     21.993 -    22.109:   86.5788%  (        4)
00:29:43.858     22.109 -    22.225:   86.6034%  (        2)
00:29:43.858     22.225 -    22.342:   86.6157%  (        1)
00:29:43.858     22.342 -    22.458:   86.6527%  (        3)
00:29:43.858     22.458 -    22.575:   86.6650%  (        1)
00:29:43.858     22.807 -    22.924:   86.7020%  (        3)
00:29:43.858     23.040 -    23.156:   86.7143%  (        1)
00:29:43.858     23.156 -    23.273:   86.7390%  (        2)
00:29:43.858     23.273 -    23.389:   86.7636%  (        2)
00:29:43.858     23.389 -    23.505:   86.8006%  (        3)
00:29:43.858     23.738 -    23.855:   86.8252%  (        2)
00:29:43.858     23.855 -    23.971:   86.8622%  (        3)
00:29:43.858     23.971 -    24.087:   86.8869%  (        2)
00:29:43.858     24.320 -    24.436:   86.8992%  (        1)
00:29:43.858     24.436 -    24.553:   86.9238%  (        2)
00:29:43.858     24.553 -    24.669:   86.9485%  (        2)
00:29:43.858     24.902 -    25.018:   86.9608%  (        1)
00:29:43.858     25.018 -    25.135:   86.9855%  (        2)
00:29:43.858     26.065 -    26.182:   86.9978%  (        1)
00:29:43.858     26.415 -    26.531:   87.0101%  (        1)
00:29:43.858     26.531 -    26.647:   87.0224%  (        1)
00:29:43.858     26.880 -    26.996:   87.0841%  (        5)
00:29:43.858     26.996 -    27.113:   87.2073%  (       10)
00:29:43.858     27.113 -    27.229:   87.5401%  (       27)
00:29:43.858     27.229 -    27.345:   88.3042%  (       62)
00:29:43.858     27.345 -    27.462:   89.4134%  (       90)
00:29:43.858     27.462 -    27.578:   90.6458%  (      100)
00:29:43.858     27.578 -    27.695:   91.7920%  (       93)
00:29:43.858     27.695 -    27.811:   92.9012%  (       90)
00:29:43.858     27.811 -    27.927:   93.5913%  (       56)
00:29:43.858     27.927 -    28.044:   93.9734%  (       31)
00:29:43.858     28.044 -    28.160:   94.6635%  (       56)
00:29:43.858     28.160 -    28.276:   95.5016%  (       68)
00:29:43.858     28.276 -    28.393:   96.4506%  (       77)
00:29:43.858     28.393 -    28.509:   97.3256%  (       71)
00:29:43.858     28.509 -    28.625:   98.1513%  (       67)
00:29:43.858     28.625 -    28.742:   98.6443%  (       40)
00:29:43.858     28.742 -    28.858:   99.0757%  (       35)
00:29:43.858     28.858 -    28.975:   99.2359%  (       13)
00:29:43.858     28.975 -    29.091:   99.2975%  (        5)
00:29:43.858     29.091 -    29.207:   99.3345%  (        3)
00:29:43.858     29.207 -    29.324:   99.3715%  (        3)
00:29:43.858     29.324 -    29.440:   99.4208%  (        4)
00:29:43.858     29.440 -    29.556:   99.4701%  (        4)
00:29:43.858     29.556 -    29.673:   99.4824%  (        1)
00:29:43.858     29.673 -    29.789:   99.5070%  (        2)
00:29:43.858     29.789 -    30.022:   99.5440%  (        3)
00:29:43.858     30.022 -    30.255:   99.5810%  (        3)
00:29:43.858     30.255 -    30.487:   99.5933%  (        1)
00:29:43.858     30.487 -    30.720:   99.6179%  (        2)
00:29:43.858     30.720 -    30.953:   99.6303%  (        1)
00:29:43.858     31.185 -    31.418:   99.6549%  (        2)
00:29:43.858     31.651 -    31.884:   99.6672%  (        1)
00:29:43.858     31.884 -    32.116:   99.6796%  (        1)
00:29:43.858     32.349 -    32.582:   99.7289%  (        4)
00:29:43.858     32.582 -    32.815:   99.7535%  (        2)
00:29:43.858     32.815 -    33.047:   99.7658%  (        1)
00:29:43.858     33.047 -    33.280:   99.8151%  (        4)
00:29:43.858     33.280 -    33.513:   99.8521%  (        3)
00:29:43.858     33.745 -    33.978:   99.8644%  (        1)
00:29:43.858     33.978 -    34.211:   99.8768%  (        1)
00:29:43.858     34.211 -    34.444:   99.8891%  (        1)
00:29:43.858     37.004 -    37.236:   99.9014%  (        1)
00:29:43.858     37.469 -    37.702:   99.9137%  (        1)
00:29:43.858     39.796 -    40.029:   99.9261%  (        1)
00:29:43.858     41.193 -    41.425:   99.9384%  (        1)
00:29:43.858     43.287 -    43.520:   99.9507%  (        1)
00:29:43.858     44.451 -    44.684:   99.9630%  (        1)
00:29:43.858     46.778 -    47.011:   99.9754%  (        1)
00:29:43.858     47.011 -    47.244:   99.9877%  (        1)
00:29:43.858     48.407 -    48.640:  100.0000%  (        1)
00:29:43.858  
00:29:43.858  Complete histogram
00:29:43.858  ==================
00:29:43.858         Range in us     Cumulative     Count
00:29:43.858      7.418 -     7.447:    0.0370%  (        3)
00:29:43.858      7.447 -     7.505:    0.6285%  (       48)
00:29:43.858      7.505 -     7.564:    1.3434%  (       58)
00:29:43.858      7.564 -     7.622:    2.2800%  (       76)
00:29:43.859      7.622 -     7.680:    9.4528%  (      582)
00:29:43.859      7.680 -     7.738:   20.9391%  (      932)
00:29:43.859      7.738 -     7.796:   27.3108%  (      517)
00:29:43.859      7.796 -     7.855:   30.4659%  (      256)
00:29:43.859      7.855 -     7.913:   36.5171%  (      491)
00:29:43.859      7.913 -     7.971:   52.1444%  (     1268)
00:29:43.859      7.971 -     8.029:   59.3665%  (      586)
00:29:43.859      8.029 -     8.087:   61.8684%  (      203)
00:29:43.859      8.087 -     8.145:   67.0939%  (      424)
00:29:43.859      8.145 -     8.204:   77.1260%  (      814)
00:29:43.859      8.204 -     8.262:   81.6860%  (      370)
00:29:43.859      8.262 -     8.320:   83.0293%  (      109)
00:29:43.859      8.320 -     8.378:   85.1861%  (      175)
00:29:43.859      8.378 -     8.436:   88.2056%  (      245)
00:29:43.859      8.436 -     8.495:   90.2391%  (      165)
00:29:43.859      8.495 -     8.553:   90.7074%  (       38)
00:29:43.859      8.553 -     8.611:   91.0032%  (       24)
00:29:43.859      8.611 -     8.669:   91.2004%  (       16)
00:29:43.859      8.669 -     8.727:   91.3606%  (       13)
00:29:43.859      8.727 -     8.785:   91.4592%  (        8)
00:29:43.859      8.785 -     8.844:   91.5085%  (        4)
00:29:43.859      8.844 -     8.902:   91.5825%  (        6)
00:29:43.859      8.902 -     8.960:   91.6317%  (        4)
00:29:43.859      8.960 -     9.018:   91.6441%  (        1)
00:29:43.859      9.135 -     9.193:   91.6564%  (        1)
00:29:43.859      9.193 -     9.251:   91.6687%  (        1)
00:29:43.859      9.309 -     9.367:   91.7057%  (        3)
00:29:43.859      9.367 -     9.425:   91.7180%  (        1)
00:29:43.859      9.484 -     9.542:   91.7303%  (        1)
00:29:43.859      9.542 -     9.600:   91.7550%  (        2)
00:29:43.859      9.658 -     9.716:   91.7673%  (        1)
00:29:43.859      9.716 -     9.775:   91.7796%  (        1)
00:29:43.859      9.949 -    10.007:   91.7920%  (        1)
00:29:43.859     10.124 -    10.182:   91.8043%  (        1)
00:29:43.859     10.240 -    10.298:   91.8289%  (        2)
00:29:43.859     10.298 -    10.356:   91.8413%  (        1)
00:29:43.859     10.473 -    10.531:   91.8536%  (        1)
00:29:43.859     10.589 -    10.647:   91.8659%  (        1)
00:29:43.859     10.705 -    10.764:   91.8782%  (        1)
00:29:43.859     11.171 -    11.229:   91.8906%  (        1)
00:29:43.859     11.462 -    11.520:   91.9029%  (        1)
00:29:43.859     11.578 -    11.636:   91.9152%  (        1)
00:29:43.859     11.869 -    11.927:   91.9275%  (        1)
00:29:43.859     11.927 -    11.985:   91.9522%  (        2)
00:29:43.859     11.985 -    12.044:   91.9645%  (        1)
00:29:43.859     12.044 -    12.102:   91.9768%  (        1)
00:29:43.859     12.102 -    12.160:   92.0015%  (        2)
00:29:43.859     12.218 -    12.276:   92.0138%  (        1)
00:29:43.859     12.276 -    12.335:   92.0261%  (        1)
00:29:43.859     12.335 -    12.393:   92.0385%  (        1)
00:29:43.859     12.393 -    12.451:   92.0508%  (        1)
00:29:43.859     12.451 -    12.509:   92.0877%  (        3)
00:29:43.859     12.509 -    12.567:   92.1124%  (        2)
00:29:43.859     12.625 -    12.684:   92.1370%  (        2)
00:29:43.859     12.684 -    12.742:   92.1494%  (        1)
00:29:43.859     12.742 -    12.800:   92.1740%  (        2)
00:29:43.859     12.800 -    12.858:   92.1987%  (        2)
00:29:43.859     12.858 -    12.916:   92.2233%  (        2)
00:29:43.859     12.916 -    12.975:   92.2480%  (        2)
00:29:43.859     13.033 -    13.091:   92.2726%  (        2)
00:29:43.859     13.091 -    13.149:   92.3219%  (        4)
00:29:43.859     13.149 -    13.207:   92.3589%  (        3)
00:29:43.859     13.207 -    13.265:   92.4328%  (        6)
00:29:43.859     13.265 -    13.324:   92.4575%  (        2)
00:29:43.859     13.324 -    13.382:   92.4821%  (        2)
00:29:43.859     13.382 -    13.440:   92.5314%  (        4)
00:29:43.859     13.440 -    13.498:   92.5561%  (        2)
00:29:43.859     13.498 -    13.556:   92.5930%  (        3)
00:29:43.859     13.556 -    13.615:   92.6054%  (        1)
00:29:43.859     13.615 -    13.673:   92.6423%  (        3)
00:29:43.859     13.673 -    13.731:   92.6670%  (        2)
00:29:43.859     13.731 -    13.789:   92.6916%  (        2)
00:29:43.859     13.789 -    13.847:   92.7163%  (        2)
00:29:43.859     13.847 -    13.905:   92.7286%  (        1)
00:29:43.859     13.905 -    13.964:   92.7533%  (        2)
00:29:43.859     14.022 -    14.080:   92.7656%  (        1)
00:29:43.859     14.080 -    14.138:   92.7902%  (        2)
00:29:43.859     14.255 -    14.313:   92.8272%  (        3)
00:29:43.859     14.313 -    14.371:   92.8395%  (        1)
00:29:43.859     14.371 -    14.429:   92.8765%  (        3)
00:29:43.859     14.429 -    14.487:   92.8888%  (        1)
00:29:43.859     14.720 -    14.778:   92.9135%  (        2)
00:29:43.859     14.836 -    14.895:   92.9258%  (        1)
00:29:43.859     14.895 -    15.011:   92.9505%  (        2)
00:29:43.859     15.709 -    15.825:   92.9628%  (        1)
00:29:43.859     15.825 -    15.942:   92.9751%  (        1)
00:29:43.859     16.058 -    16.175:   93.0121%  (        3)
00:29:43.859     16.291 -    16.407:   93.0244%  (        1)
00:29:43.859     16.407 -    16.524:   93.0367%  (        1)
00:29:43.859     16.524 -    16.640:   93.0491%  (        1)
00:29:43.859     16.756 -    16.873:   93.0737%  (        2)
00:29:43.859     17.455 -    17.571:   93.0860%  (        1)
00:29:43.859     17.571 -    17.687:   93.1107%  (        2)
00:29:43.859     17.687 -    17.804:   93.1230%  (        1)
00:29:43.859     18.036 -    18.153:   93.1353%  (        1)
00:29:43.859     18.153 -    18.269:   93.1476%  (        1)
00:29:43.859     18.502 -    18.618:   93.1600%  (        1)
00:29:43.859     18.618 -    18.735:   93.1969%  (        3)
00:29:43.859     19.084 -    19.200:   93.2216%  (        2)
00:29:43.859     19.200 -    19.316:   93.2339%  (        1)
00:29:43.859     20.480 -    20.596:   93.2462%  (        1)
00:29:43.859     20.945 -    21.062:   93.2586%  (        1)
00:29:43.859     21.876 -    21.993:   93.3202%  (        5)
00:29:43.859     21.993 -    22.109:   93.5051%  (       15)
00:29:43.859     22.109 -    22.225:   93.8255%  (       26)
00:29:43.859     22.225 -    22.342:   94.4787%  (       53)
00:29:43.859     22.342 -    22.458:   95.1935%  (       58)
00:29:43.859     22.458 -    22.575:   95.8467%  (       53)
00:29:43.859     22.575 -    22.691:   96.4752%  (       51)
00:29:43.859     22.691 -    22.807:   96.8573%  (       31)
00:29:43.859     22.807 -    22.924:   97.1284%  (       22)
00:29:43.859     22.924 -    23.040:   97.2886%  (       13)
00:29:43.859     23.040 -    23.156:   97.7200%  (       35)
00:29:43.859     23.156 -    23.273:   98.1637%  (       36)
00:29:43.859     23.273 -    23.389:   98.6690%  (       41)
00:29:43.859     23.389 -    23.505:   99.0140%  (       28)
00:29:43.859     23.505 -    23.622:   99.3591%  (       28)
00:29:43.859     23.622 -    23.738:   99.6426%  (       23)
00:29:43.859     23.738 -    23.855:   99.7412%  (        8)
00:29:43.859     23.855 -    23.971:   99.7782%  (        3)
00:29:43.859     23.971 -    24.087:   99.8028%  (        2)
00:29:43.859     24.320 -    24.436:   99.8151%  (        1)
00:29:43.859     26.531 -    26.647:   99.8275%  (        1)
00:29:43.859     26.764 -    26.880:   99.8398%  (        1)
00:29:43.859     27.345 -    27.462:   99.8644%  (        2)
00:29:43.859     27.462 -    27.578:   99.8768%  (        1)
00:29:43.859     28.160 -    28.276:   99.8891%  (        1)
00:29:43.859     28.858 -    28.975:   99.9014%  (        1)
00:29:43.859     30.953 -    31.185:   99.9261%  (        2)
00:29:43.859     33.978 -    34.211:   99.9384%  (        1)
00:29:43.859     36.538 -    36.771:   99.9507%  (        1)
00:29:43.859     38.633 -    38.865:   99.9630%  (        1)
00:29:43.859     39.796 -    40.029:   99.9754%  (        1)
00:29:43.859     40.262 -    40.495:   99.9877%  (        1)
00:29:43.859     54.225 -    54.458:  100.0000%  (        1)
00:29:43.859  
00:29:43.859  
00:29:43.859  real	0m1.267s
00:29:43.859  user	0m1.089s
00:29:43.859  sys	0m0.109s
00:29:43.859   00:03:14	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:43.859   00:03:14	-- common/autotest_common.sh@10 -- # set +x
00:29:43.859  ************************************
00:29:43.859  END TEST nvme_overhead
00:29:43.859  ************************************
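The overhead histograms are cumulative: each line gives the fraction of operations whose per-I/O software overhead fell at or below the bucket's upper edge. In the submit histogram above, the running percentage first crosses 50% in the 12.916 - 12.975 us bucket, so the median submit overhead is roughly 12.9-13.0 us even though the average is 15.08 us; the tail out to ~48.6 us drags the mean above the median. A rough way to pull a percentile straight out of one of these blocks, assuming the "low - high:  pct%  (count)" field layout used throughout this log:

  # Sketch: print the upper edge of the first bucket whose cumulative
  # percentage reaches the target (here p99). Run it against a single
  # histogram block sliced out of the log, since the full log contains
  # several histograms.
  awk -v target=99.0 '$3 == "-" && $5 ~ /%$/ {
      pct = $5; sub(/%/, "", pct)
      if (pct + 0 >= target) { hi = $4; sub(/:/, "", hi); print hi; exit }
  }' histogram_block.txt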
00:29:43.859   00:03:14	-- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:29:43.859   00:03:14	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:29:43.859   00:03:14	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:43.859   00:03:14	-- common/autotest_common.sh@10 -- # set +x
00:29:43.859  ************************************
00:29:43.859  START TEST nvme_arbitration
00:29:43.859  ************************************
00:29:43.859   00:03:14	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:29:47.149  Initializing NVMe Controllers
00:29:47.149  Attached to 0000:00:06.0
00:29:47.149  Associating QEMU NVMe Ctrl       (12340               ) with lcore 0
00:29:47.149  Associating QEMU NVMe Ctrl       (12340               ) with lcore 1
00:29:47.149  Associating QEMU NVMe Ctrl       (12340               ) with lcore 2
00:29:47.149  Associating QEMU NVMe Ctrl       (12340               ) with lcore 3
00:29:47.149  /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:29:47.149  /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:29:47.149  Initialization complete. Launching workers.
00:29:47.149  Starting thread on core 1 with urgent priority queue
00:29:47.149  Starting thread on core 2 with urgent priority queue
00:29:47.149  Starting thread on core 0 with urgent priority queue
00:29:47.149  Starting thread on core 3 with urgent priority queue
00:29:47.149  QEMU NVMe Ctrl       (12340               ) core 0:  1728.00 IO/s    57.87 secs/100000 ios
00:29:47.149  QEMU NVMe Ctrl       (12340               ) core 1:   789.33 IO/s   126.69 secs/100000 ios
00:29:47.149  QEMU NVMe Ctrl       (12340               ) core 2:   320.00 IO/s   312.50 secs/100000 ios
00:29:47.149  QEMU NVMe Ctrl       (12340               ) core 3:  1173.33 IO/s    85.23 secs/100000 ios
00:29:47.149  ========================================================
00:29:47.149  
00:29:47.149  
00:29:47.149  real	0m3.461s
00:29:47.149  user	0m9.583s
00:29:47.149  sys	0m0.124s
00:29:47.149   00:03:17	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:47.149   00:03:17	-- common/autotest_common.sh@10 -- # set +x
00:29:47.149  ************************************
00:29:47.149  END TEST nvme_arbitration
00:29:47.149  ************************************
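The secs/100000 ios column in the arbitration table is just the target I/O count divided by the measured rate: core 0 at 1728.00 IO/s gives 100000 / 1728.00 = 57.87 s, and core 2 at 320.00 IO/s gives 100000 / 320.00 = 312.50 s. That spread across the four urgent-priority queues is what the arbitration example is built to expose.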
00:29:47.149   00:03:17	-- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log
00:29:47.149   00:03:17	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:29:47.149   00:03:17	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:47.149   00:03:17	-- common/autotest_common.sh@10 -- # set +x
00:29:47.149  ************************************
00:29:47.149  START TEST nvme_single_aen
00:29:47.149  ************************************
00:29:47.149   00:03:17	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log
00:29:47.149  [2024-12-14 00:03:17.879789] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:29:47.149  [2024-12-14 00:03:17.879931] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:47.408  [2024-12-14 00:03:18.075512] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:29:47.408  Asynchronous Event Request test
00:29:47.408  Attached to 0000:00:06.0
00:29:47.408  Reset controller to setup AER completions for this process
00:29:47.408  Registering asynchronous event callbacks...
00:29:47.408  Getting orig temperature thresholds of all controllers
00:29:47.408  0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:29:47.408  Setting all controllers temperature threshold low to trigger AER
00:29:47.408  Waiting for all controllers temperature threshold to be set lower
00:29:47.408  0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:29:47.408  aer_cb - Resetting Temp Threshold for device: 0000:00:06.0
00:29:47.408  Waiting for all controllers to trigger AER and reset threshold
00:29:47.408  0000:00:06.0: Current Temperature:         323 Kelvin (50 Celsius)
00:29:47.408  Cleaning up...
00:29:47.408  
00:29:47.408  real	0m0.293s
00:29:47.408  user	0m0.104s
00:29:47.408  sys	0m0.125s
00:29:47.409   00:03:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:47.409   00:03:18	-- common/autotest_common.sh@10 -- # set +x
00:29:47.409  ************************************
00:29:47.409  END TEST nvme_single_aen
00:29:47.409  ************************************
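The mechanism behind the AER test is visible in the trace: the controller reports 323 Kelvin against a 343 Kelvin threshold, so the test drops the threshold below the current reading, the controller fires an asynchronous event (the aer_cb for log page 2 above), and the callback then resets the threshold.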
00:29:47.668   00:03:18	-- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:29:47.668   00:03:18	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:47.668   00:03:18	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:47.668   00:03:18	-- common/autotest_common.sh@10 -- # set +x
00:29:47.668  ************************************
00:29:47.668  START TEST nvme_doorbell_aers
00:29:47.668  ************************************
00:29:47.668   00:03:18	-- common/autotest_common.sh@1114 -- # nvme_doorbell_aers
00:29:47.668   00:03:18	-- nvme/nvme.sh@70 -- # bdfs=()
00:29:47.668   00:03:18	-- nvme/nvme.sh@70 -- # local bdfs bdf
00:29:47.668   00:03:18	-- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:29:47.668    00:03:18	-- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:29:47.668    00:03:18	-- common/autotest_common.sh@1508 -- # bdfs=()
00:29:47.668    00:03:18	-- common/autotest_common.sh@1508 -- # local bdfs
00:29:47.668    00:03:18	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:29:47.668     00:03:18	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:29:47.668     00:03:18	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:29:47.668    00:03:18	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:29:47.668    00:03:18	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0
00:29:47.668   00:03:18	-- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:29:47.668   00:03:18	-- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0'
00:29:47.927  [2024-12-14 00:03:18.457517] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 139396) is not found. Dropping the request.
00:29:57.904  Executing: test_write_invalid_db
00:29:57.904  Waiting for AER completion...
00:29:57.904  Failure: test_write_invalid_db
00:29:57.904  
00:29:57.904  Executing: test_invalid_db_write_overflow_sq
00:29:57.904  Waiting for AER completion...
00:29:57.904  Failure: test_invalid_db_write_overflow_sq
00:29:57.904  
00:29:57.904  Executing: test_invalid_db_write_overflow_cq
00:29:57.904  Waiting for AER completion...
00:29:57.904  Failure: test_invalid_db_write_overflow_cq
00:29:57.904  
00:29:57.904  
00:29:57.904  real	0m10.107s
00:29:57.904  user	0m8.617s
00:29:57.904  sys	0m1.448s
00:29:57.904   00:03:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:57.904   00:03:28	-- common/autotest_common.sh@10 -- # set +x
00:29:57.904  ************************************
00:29:57.904  END TEST nvme_doorbell_aers
00:29:57.904  ************************************
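The xtrace above also documents how the harness discovers controllers: gen_nvme.sh emits an SPDK JSON configuration and jq extracts each controller's transport address. The same enumeration works standalone; a minimal sketch using the exact commands from the trace:

  # Sketch: enumerate NVMe PCI addresses the way nvme_doorbell_aers
  # does, via the generated SPDK JSON config.
  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh \
          | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      echo "NVMe controller at $bdf"   # prints 0000:00:06.0 in this run
  done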
00:29:57.904    00:03:28	-- nvme/nvme.sh@97 -- # uname
00:29:57.904   00:03:28	-- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:29:57.904   00:03:28	-- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log
00:29:57.904   00:03:28	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:29:57.904   00:03:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:57.904   00:03:28	-- common/autotest_common.sh@10 -- # set +x
00:29:57.904  ************************************
00:29:57.904  START TEST nvme_multi_aen
00:29:57.904  ************************************
00:29:57.904   00:03:28	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log
00:29:57.904  [2024-12-14 00:03:28.365261] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:29:57.904  [2024-12-14 00:03:28.365499] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:57.904  [2024-12-14 00:03:28.520154] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:29:57.904  [2024-12-14 00:03:28.520345] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 139396) is not found. Dropping the request.
00:29:57.904  [2024-12-14 00:03:28.520522] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 139396) is not found. Dropping the request.
00:29:57.904  [2024-12-14 00:03:28.520633] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 139396) is not found. Dropping the request.
00:29:57.904  [2024-12-14 00:03:28.526740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:29:57.904  [2024-12-14 00:03:28.526963] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:57.904  Child process pid: 139590
00:29:58.163  [Child] Asynchronous Event Request test
00:29:58.163  [Child] Attached to 0000:00:06.0
00:29:58.163  [Child] Registering asynchronous event callbacks...
00:29:58.163  [Child] Getting orig temperature thresholds of all controllers
00:29:58.163  [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:29:58.163  [Child] Waiting for all controllers to trigger AER and reset threshold
00:29:58.163  [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:29:58.163  [Child] 0000:00:06.0: Current Temperature:         323 Kelvin (50 Celsius)
00:29:58.163  [Child] Cleaning up...
00:29:58.163  Asynchronous Event Request test
00:29:58.163  Attached to 0000:00:06.0
00:29:58.163  Reset controller to setup AER completions for this process
00:29:58.163  Registering asynchronous event callbacks...
00:29:58.163  Getting orig temperature thresholds of all controllers
00:29:58.163  0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:29:58.163  Setting all controllers temperature threshold low to trigger AER
00:29:58.163  Waiting for all controllers temperature threshold to be set lower
00:29:58.163  0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:29:58.163  aer_cb - Resetting Temp Threshold for device: 0000:00:06.0
00:29:58.163  Waiting for all controllers to trigger AER and reset threshold
00:29:58.163  0000:00:06.0: Current Temperature:         323 Kelvin (50 Celsius)
00:29:58.163  Cleaning up...
00:29:58.163  
00:29:58.163  real	0m0.534s
00:29:58.163  user	0m0.152s
00:29:58.163  sys	0m0.226s
00:29:58.163   00:03:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:58.163   00:03:28	-- common/autotest_common.sh@10 -- # set +x
00:29:58.163  ************************************
00:29:58.163  END TEST nvme_multi_aen
00:29:58.163  ************************************
00:29:58.422   00:03:28	-- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:29:58.422   00:03:28	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:29:58.422   00:03:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:58.422   00:03:28	-- common/autotest_common.sh@10 -- # set +x
00:29:58.422  ************************************
00:29:58.422  START TEST nvme_startup
00:29:58.422  ************************************
00:29:58.422   00:03:28	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:29:58.681  Initializing NVMe Controllers
00:29:58.681  Attached to 0000:00:06.0
00:29:58.681  Initialization complete.
00:29:58.681  Time used:193631.859      (us).
00:29:58.681  
00:29:58.681  real	0m0.286s
00:29:58.681  user	0m0.115s
00:29:58.681  sys	0m0.112s
00:29:58.681   00:03:29	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:58.681   00:03:29	-- common/autotest_common.sh@10 -- # set +x
00:29:58.681  ************************************
00:29:58.681  END TEST nvme_startup
00:29:58.681  ************************************
00:29:58.681   00:03:29	-- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:29:58.681   00:03:29	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:29:58.681   00:03:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:58.681   00:03:29	-- common/autotest_common.sh@10 -- # set +x
00:29:58.681  ************************************
00:29:58.681  START TEST nvme_multi_secondary
00:29:58.681  ************************************
00:29:58.681   00:03:29	-- common/autotest_common.sh@1114 -- # nvme_multi_secondary
00:29:58.681   00:03:29	-- nvme/nvme.sh@52 -- # pid0=139648
00:29:58.681   00:03:29	-- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:29:58.681   00:03:29	-- nvme/nvme.sh@54 -- # pid1=139649
00:29:58.681   00:03:29	-- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:29:58.681   00:03:29	-- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:30:02.867  Initializing NVMe Controllers
00:30:02.867  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:30:02.867  Associating PCIE (0000:00:06.0) NSID 1 with lcore 1
00:30:02.867  Initialization complete. Launching workers.
00:30:02.867  ========================================================
00:30:02.867                                                                             Latency(us)
00:30:02.867  Device Information                     :       IOPS      MiB/s    Average        min        max
00:30:02.867  PCIE (0000:00:06.0) NSID 1 from core  1:   32670.67     127.62     489.39     126.34    4961.55
00:30:02.867  ========================================================
00:30:02.867  Total                                  :   32670.67     127.62     489.39     126.34    4961.55
00:30:02.867  
00:30:02.867  Initializing NVMe Controllers
00:30:02.867  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:30:02.867  Associating PCIE (0000:00:06.0) NSID 1 with lcore 2
00:30:02.867  Initialization complete. Launching workers.
00:30:02.867  ========================================================
00:30:02.867                                                                             Latency(us)
00:30:02.867  Device Information                     :       IOPS      MiB/s    Average        min        max
00:30:02.867  PCIE (0000:00:06.0) NSID 1 from core  2:   13300.33      51.95    1202.25     145.88   25032.43
00:30:02.867  ========================================================
00:30:02.867  Total                                  :   13300.33      51.95    1202.25     145.88   25032.43
00:30:02.867  
00:30:02.867   00:03:32	-- nvme/nvme.sh@56 -- # wait 139648
00:30:03.803  Initializing NVMe Controllers
00:30:03.803  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:30:03.803  Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:30:03.803  Initialization complete. Launching workers.
00:30:03.803  ========================================================
00:30:03.803                                                                             Latency(us)
00:30:03.803  Device Information                     :       IOPS      MiB/s    Average        min        max
00:30:03.803  PCIE (0000:00:06.0) NSID 1 from core  0:   41418.82     161.79     385.96     120.15    1305.03
00:30:03.803  ========================================================
00:30:03.803  Total                                  :   41418.82     161.79     385.96     120.15    1305.03
00:30:03.803  
00:30:03.803   00:03:34	-- nvme/nvme.sh@57 -- # wait 139649
00:30:03.803   00:03:34	-- nvme/nvme.sh@61 -- # pid0=139725
00:30:03.803   00:03:34	-- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:30:03.803   00:03:34	-- nvme/nvme.sh@63 -- # pid1=139726
00:30:03.803   00:03:34	-- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:30:03.803   00:03:34	-- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:30:07.992  Initializing NVMe Controllers
00:30:07.992  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:30:07.992  Associating PCIE (0000:00:06.0) NSID 1 with lcore 1
00:30:07.992  Initialization complete. Launching workers.
00:30:07.992  ========================================================
00:30:07.992                                                                             Latency(us)
00:30:07.992  Device Information                     :       IOPS      MiB/s    Average        min        max
00:30:07.992  PCIE (0000:00:06.0) NSID 1 from core  1:   32000.27     125.00     499.68     135.25    3316.39
00:30:07.992  ========================================================
00:30:07.992  Total                                  :   32000.27     125.00     499.68     135.25    3316.39
00:30:07.992  
00:30:07.992  Initializing NVMe Controllers
00:30:07.992  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:30:07.992  Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:30:07.992  Initialization complete. Launching workers.
00:30:07.992  ========================================================
00:30:07.992                                                                             Latency(us)
00:30:07.992  Device Information                     :       IOPS      MiB/s    Average        min        max
00:30:07.992  PCIE (0000:00:06.0) NSID 1 from core  0:   32030.67     125.12     499.15     121.13    1935.82
00:30:07.992  ========================================================
00:30:07.992  Total                                  :   32030.67     125.12     499.15     121.13    1935.82
00:30:07.992  
00:30:09.369  Initializing NVMe Controllers
00:30:09.369  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:30:09.369  Associating PCIE (0000:00:06.0) NSID 1 with lcore 2
00:30:09.369  Initialization complete. Launching workers.
00:30:09.369  ========================================================
00:30:09.369                                                                             Latency(us)
00:30:09.369  Device Information                     :       IOPS      MiB/s    Average        min        max
00:30:09.369  PCIE (0000:00:06.0) NSID 1 from core  2:   16806.80      65.65     951.17     131.86   20931.85
00:30:09.369  ========================================================
00:30:09.369  Total                                  :   16806.80      65.65     951.17     131.86   20931.85
00:30:09.369  
00:30:09.369  ************************************
00:30:09.369  END TEST nvme_multi_secondary
00:30:09.369  ************************************
00:30:09.369   00:03:39	-- nvme/nvme.sh@65 -- # wait 139725
00:30:09.369   00:03:39	-- nvme/nvme.sh@66 -- # wait 139726
00:30:09.369  
00:30:09.369  real	0m10.602s
00:30:09.369  user	0m18.696s
00:30:09.369  sys	0m0.779s
00:30:09.369   00:03:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:09.369   00:03:39	-- common/autotest_common.sh@10 -- # set +x
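nvme_multi_secondary (nvme.sh@51-66 above) exercises DPDK multi-process mode: three spdk_nvme_perf instances share shm id 0 (-i 0) on disjoint core masks, and with --proc-type=auto whichever initializes first becomes the primary while the others attach as secondaries. Illustrative shape of the launch, not copied from nvme.sh:

# The -t 5 instance outlives the -t 3 instances, so a primary process
# stays up while both secondaries come and go; wait reaps all three.
build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid2=$!
wait "$pid0" "$pid1" "$pid2"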
00:30:09.369   00:03:39	-- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:30:09.369   00:03:39	-- nvme/nvme.sh@102 -- # kill_stub
00:30:09.369   00:03:39	-- common/autotest_common.sh@1075 -- # [[ -e /proc/138961 ]]
00:30:09.369   00:03:39	-- common/autotest_common.sh@1076 -- # kill 138961
00:30:09.369   00:03:39	-- common/autotest_common.sh@1077 -- # wait 138961
00:30:09.937  [2024-12-14 00:03:40.536362] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 139589) is not found. Dropping the request.
00:30:09.937  [2024-12-14 00:03:40.536808] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 139589) is not found. Dropping the request.
00:30:09.937  [2024-12-14 00:03:40.537067] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 139589) is not found. Dropping the request.
00:30:09.937  [2024-12-14 00:03:40.537307] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 139589) is not found. Dropping the request.
00:30:10.525   00:03:40	-- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0
00:30:10.525   00:03:40	-- common/autotest_common.sh@1083 -- # echo 2
00:30:10.525   00:03:40	-- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:30:10.525   00:03:40	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:30:10.525   00:03:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:10.525   00:03:40	-- common/autotest_common.sh@10 -- # set +x
00:30:10.525  ************************************
00:30:10.525  START TEST bdev_nvme_reset_stuck_adm_cmd
00:30:10.525  ************************************
00:30:10.525   00:03:40	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:30:10.525  * Looking for test storage...
00:30:10.525  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:30:10.525    00:03:41	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:30:10.525     00:03:41	-- common/autotest_common.sh@1690 -- # lcov --version
00:30:10.525     00:03:41	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:30:10.525    00:03:41	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:30:10.525    00:03:41	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:30:10.525    00:03:41	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:30:10.525    00:03:41	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:30:10.525    00:03:41	-- scripts/common.sh@335 -- # IFS=.-:
00:30:10.525    00:03:41	-- scripts/common.sh@335 -- # read -ra ver1
00:30:10.525    00:03:41	-- scripts/common.sh@336 -- # IFS=.-:
00:30:10.525    00:03:41	-- scripts/common.sh@336 -- # read -ra ver2
00:30:10.525    00:03:41	-- scripts/common.sh@337 -- # local 'op=<'
00:30:10.525    00:03:41	-- scripts/common.sh@339 -- # ver1_l=2
00:30:10.525    00:03:41	-- scripts/common.sh@340 -- # ver2_l=1
00:30:10.525    00:03:41	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:30:10.525    00:03:41	-- scripts/common.sh@343 -- # case "$op" in
00:30:10.525    00:03:41	-- scripts/common.sh@344 -- # : 1
00:30:10.525    00:03:41	-- scripts/common.sh@363 -- # (( v = 0 ))
00:30:10.525    00:03:41	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:10.525     00:03:41	-- scripts/common.sh@364 -- # decimal 1
00:30:10.525     00:03:41	-- scripts/common.sh@352 -- # local d=1
00:30:10.525     00:03:41	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:10.525     00:03:41	-- scripts/common.sh@354 -- # echo 1
00:30:10.525    00:03:41	-- scripts/common.sh@364 -- # ver1[v]=1
00:30:10.525     00:03:41	-- scripts/common.sh@365 -- # decimal 2
00:30:10.525     00:03:41	-- scripts/common.sh@352 -- # local d=2
00:30:10.525     00:03:41	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:10.525     00:03:41	-- scripts/common.sh@354 -- # echo 2
00:30:10.525    00:03:41	-- scripts/common.sh@365 -- # ver2[v]=2
00:30:10.525    00:03:41	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:30:10.525    00:03:41	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:30:10.525    00:03:41	-- scripts/common.sh@367 -- # return 0
00:30:10.525    00:03:41	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:10.525    00:03:41	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:30:10.525  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:10.525  		--rc genhtml_branch_coverage=1
00:30:10.525  		--rc genhtml_function_coverage=1
00:30:10.525  		--rc genhtml_legend=1
00:30:10.525  		--rc geninfo_all_blocks=1
00:30:10.525  		--rc geninfo_unexecuted_blocks=1
00:30:10.525  		
00:30:10.525  		'
00:30:10.525    00:03:41	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:30:10.525  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:10.525  		--rc genhtml_branch_coverage=1
00:30:10.525  		--rc genhtml_function_coverage=1
00:30:10.525  		--rc genhtml_legend=1
00:30:10.525  		--rc geninfo_all_blocks=1
00:30:10.525  		--rc geninfo_unexecuted_blocks=1
00:30:10.525  		
00:30:10.525  		'
00:30:10.525    00:03:41	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:30:10.525  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:10.525  		--rc genhtml_branch_coverage=1
00:30:10.525  		--rc genhtml_function_coverage=1
00:30:10.525  		--rc genhtml_legend=1
00:30:10.525  		--rc geninfo_all_blocks=1
00:30:10.526  		--rc geninfo_unexecuted_blocks=1
00:30:10.526  		
00:30:10.526  		'
00:30:10.526    00:03:41	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:30:10.526  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:10.526  		--rc genhtml_branch_coverage=1
00:30:10.526  		--rc genhtml_function_coverage=1
00:30:10.526  		--rc genhtml_legend=1
00:30:10.526  		--rc geninfo_all_blocks=1
00:30:10.526  		--rc geninfo_unexecuted_blocks=1
00:30:10.526  		
00:30:10.526  		'
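The lt 1.15 2 walk above (scripts/common.sh@332-367) is a component-wise version comparison used to pick the lcov option spelling. A condensed sketch of the comparator, with the decimal-sanitizing helper reduced to a default of 0:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    # Split both versions on '.', '-' and ':' (IFS=.-: in the trace),
    # then compare component by component, padding the shorter with 0.
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
    for ((v = 0; v < len; v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # greater: '<' is false
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # smaller: '<' is true
    done
    return 1   # equal: '<' is false
}

Here lt 1.15 2 returns 0 (true), which selects the lcov 1.x option spelling (--rc lcov_branch_coverage=1) seen in the LCOV_OPTS exports above.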
00:30:10.526   00:03:41	-- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:30:10.526   00:03:41	-- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:30:10.526   00:03:41	-- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:30:10.526   00:03:41	-- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:30:10.526   00:03:41	-- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:30:10.526    00:03:41	-- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:30:10.526    00:03:41	-- common/autotest_common.sh@1519 -- # bdfs=()
00:30:10.526    00:03:41	-- common/autotest_common.sh@1519 -- # local bdfs
00:30:10.526    00:03:41	-- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:30:10.526     00:03:41	-- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:30:10.526     00:03:41	-- common/autotest_common.sh@1508 -- # bdfs=()
00:30:10.526     00:03:41	-- common/autotest_common.sh@1508 -- # local bdfs
00:30:10.526     00:03:41	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:30:10.526      00:03:41	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:30:10.526      00:03:41	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:30:10.830     00:03:41	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:30:10.830     00:03:41	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0
00:30:10.830    00:03:41	-- common/autotest_common.sh@1522 -- # echo 0000:00:06.0
00:30:10.830   00:03:41	-- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0
00:30:10.830   00:03:41	-- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']'
00:30:10.830   00:03:41	-- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=139901
00:30:10.830   00:03:41	-- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:30:10.831   00:03:41	-- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 139901
00:30:10.831   00:03:41	-- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:30:10.831   00:03:41	-- common/autotest_common.sh@829 -- # '[' -z 139901 ']'
00:30:10.831   00:03:41	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:10.831   00:03:41	-- common/autotest_common.sh@834 -- # local max_retries=100
00:30:10.831  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:10.831   00:03:41	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:10.831   00:03:41	-- common/autotest_common.sh@838 -- # xtrace_disable
00:30:10.831   00:03:41	-- common/autotest_common.sh@10 -- # set +x
00:30:10.831  [2024-12-14 00:03:41.338649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:30:10.831  [2024-12-14 00:03:41.338845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139901 ]
00:30:11.095  [2024-12-14 00:03:41.550469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:11.095  [2024-12-14 00:03:41.801974] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:30:11.095  [2024-12-14 00:03:41.802345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:11.095  [2024-12-14 00:03:41.802748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:30:11.095  [2024-12-14 00:03:41.803501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:11.095  [2024-12-14 00:03:41.803443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:30:12.478   00:03:42	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:12.478   00:03:42	-- common/autotest_common.sh@862 -- # return 0
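waitforlisten (autotest_common.sh@829-862 above) blocks until the freshly launched spdk_tgt answers RPC on /var/tmp/spdk.sock; the (( i == 0 )) / return 0 pair above marks a successful poll. A sketch of the polling loop, with rpc.py's rpc_get_methods assumed as the liveness probe (the probe itself is not visible in this trace):

waitforlisten() {
    # Poll: the pid must stay alive and the RPC socket must answer.
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 100; i != 0; i--)); do
        kill -0 "$pid" 2> /dev/null || return 1
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
        sleep 0.5
    done
    return 1
}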
00:30:12.478   00:03:42	-- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
00:30:12.478   00:03:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:30:12.478   00:03:42	-- common/autotest_common.sh@10 -- # set +x
00:30:12.478  nvme0n1
00:30:12.478   00:03:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:12.478    00:03:42	-- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:30:12.478   00:03:42	-- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_AgN7Q.txt
00:30:12.478   00:03:42	-- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:30:12.478   00:03:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:30:12.478   00:03:42	-- common/autotest_common.sh@10 -- # set +x
00:30:12.478  true
00:30:12.478   00:03:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:12.478    00:03:42	-- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:30:12.478   00:03:42	-- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734134622
00:30:12.478   00:03:42	-- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=139928
00:30:12.478   00:03:42	-- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:30:12.478   00:03:42	-- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:30:12.478   00:03:42	-- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:30:14.381   00:03:44	-- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:30:14.381   00:03:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:30:14.381   00:03:44	-- common/autotest_common.sh@10 -- # set +x
00:30:14.381  [2024-12-14 00:03:44.959143] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:30:14.381  [2024-12-14 00:03:44.959571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:14.381  [2024-12-14 00:03:44.959836] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:30:14.381  [2024-12-14 00:03:44.959988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:14.381  [2024-12-14 00:03:44.962357] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:14.381   00:03:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:14.381   00:03:44	-- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 139928
00:30:14.381  Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 139928
00:30:14.381   00:03:44	-- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 139928
00:30:14.381    00:03:44	-- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:30:14.381   00:03:44	-- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:30:14.382   00:03:44	-- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:14.382   00:03:44	-- common/autotest_common.sh@561 -- # xtrace_disable
00:30:14.382   00:03:44	-- common/autotest_common.sh@10 -- # set +x
00:30:14.382   00:03:44	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:14.382   00:03:44	-- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:30:14.382    00:03:44	-- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_AgN7Q.txt
00:30:14.382   00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:30:14.382    00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:30:14.382    00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:30:14.382    00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:30:14.382     00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:30:14.382     00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:30:14.382      00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:30:14.382    00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:30:14.382    00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:30:14.382   00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:30:14.382    00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:30:14.382    00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:30:14.382    00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:30:14.382      00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:30:14.382     00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:30:14.382     00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:30:14.382    00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:30:14.382    00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:30:14.382   00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
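The two base64_decode_bits calls above unpack the completion captured in /tmp/err_inj_AgN7Q.txt: the base64 blob is the 16-byte CQE, its last two bytes form the NVMe status word, and the (shift, mask) pairs from the trace pull out SC (bits 1-8) and SCT. A sketch matching the trace (negative array indices need bash >= 4.3):

base64_decode_bits() {
    # $1 = base64 CQE, $2 = bit shift, $3 = mask applied after shifting.
    local -a bin_array
    local status
    bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
    status=$((bin_array[-2] | bin_array[-1] << 8))
    printf '0x%x' $(((status >> $2) & $3))
}

base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # -> 0x1, SC: the injected Invalid Opcode
base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # -> 0x0, SCT: generic command status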
00:30:14.382   00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_AgN7Q.txt
00:30:14.382   00:03:45	-- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 139901
00:30:14.382   00:03:45	-- common/autotest_common.sh@936 -- # '[' -z 139901 ']'
00:30:14.382   00:03:45	-- common/autotest_common.sh@940 -- # kill -0 139901
00:30:14.382    00:03:45	-- common/autotest_common.sh@941 -- # uname
00:30:14.382   00:03:45	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:30:14.382    00:03:45	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139901
00:30:14.382   00:03:45	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:30:14.382   00:03:45	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:30:14.382   00:03:45	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 139901'
00:30:14.382  killing process with pid 139901
00:30:14.382   00:03:45	-- common/autotest_common.sh@955 -- # kill 139901
00:30:14.382   00:03:45	-- common/autotest_common.sh@960 -- # wait 139901
00:30:16.914   00:03:47	-- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:30:16.914   00:03:47	-- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:30:16.914  
00:30:16.914  real	0m6.064s
00:30:16.914  user	0m20.716s
00:30:16.914  sys	0m0.767s
00:30:16.914   00:03:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:16.914   00:03:47	-- common/autotest_common.sh@10 -- # set +x
00:30:16.914  ************************************
00:30:16.914  END TEST bdev_nvme_reset_stuck_adm_cmd
00:30:16.914  ************************************
00:30:16.915   00:03:47	-- nvme/nvme.sh@107 -- # [[ y == y ]]
00:30:16.915   00:03:47	-- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:30:16.915   00:03:47	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:30:16.915   00:03:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:16.915   00:03:47	-- common/autotest_common.sh@10 -- # set +x
00:30:16.915  ************************************
00:30:16.915  START TEST nvme_fio
00:30:16.915  ************************************
00:30:16.915   00:03:47	-- common/autotest_common.sh@1114 -- # nvme_fio_test
00:30:16.915   00:03:47	-- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:30:16.915   00:03:47	-- nvme/nvme.sh@32 -- # ran_fio=false
00:30:16.915    00:03:47	-- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:30:16.915    00:03:47	-- common/autotest_common.sh@1508 -- # bdfs=()
00:30:16.915    00:03:47	-- common/autotest_common.sh@1508 -- # local bdfs
00:30:16.915    00:03:47	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:30:16.915     00:03:47	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:30:16.915     00:03:47	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:30:16.915    00:03:47	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:30:16.915    00:03:47	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0
00:30:16.915   00:03:47	-- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0')
00:30:16.915   00:03:47	-- nvme/nvme.sh@33 -- # local bdfs bdf
00:30:16.915   00:03:47	-- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:30:16.915   00:03:47	-- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0'
00:30:16.915   00:03:47	-- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:30:16.915   00:03:47	-- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0'
00:30:16.915   00:03:47	-- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:30:16.915   00:03:47	-- nvme/nvme.sh@41 -- # bs=4096
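nvme.sh@35-41 above picks fio's block size: identify output is checked for namespaces and for an 'Extended Data LBA' format, which would mean metadata is interleaved with the data and the I/O size must include it; neither grep changed the default here, so bs=4096. Shape of the decision, with the extended size purely illustrative:

if spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" | grep -q 'Extended Data LBA'; then
    bs=4160   # illustrative: 4096 data + 64 metadata per block (not from this log)
else
    bs=4096   # path taken above
fi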
00:30:16.915   00:03:47	-- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096
00:30:16.915   00:03:47	-- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096
00:30:16.915   00:03:47	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:30:16.915   00:03:47	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:30:16.915   00:03:47	-- common/autotest_common.sh@1328 -- # local sanitizers
00:30:16.915   00:03:47	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:30:16.915   00:03:47	-- common/autotest_common.sh@1330 -- # shift
00:30:16.915   00:03:47	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:30:16.915   00:03:47	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:30:16.915    00:03:47	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:30:16.915    00:03:47	-- common/autotest_common.sh@1334 -- # grep libasan
00:30:16.915    00:03:47	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:30:16.915   00:03:47	-- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6
00:30:16.915   00:03:47	-- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]]
00:30:16.915   00:03:47	-- common/autotest_common.sh@1336 -- # break
00:30:16.915   00:03:47	-- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:30:16.915   00:03:47	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096
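fio_plugin (autotest_common.sh@1326-1341 above) lets the stock /usr/src/fio binary load the ASAN-instrumented spdk_nvme ioengine: it scans the plugin's ldd output for a sanitizer runtime and preloads it ahead of the plugin, since the runtime must be loaded before any instrumented code. A condensed sketch with the sanitizer names taken from the trace:

fio_plugin() {
    # $1 = SPDK fio ioengine (.so); remaining args go to fio itself.
    local plugin=$1 asan_lib='' sanitizer
    shift
    for sanitizer in libasan libclang_rt.asan; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
}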
00:30:17.174  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:30:17.174  fio-3.35
00:30:17.174  Starting 1 thread
00:30:20.463  
00:30:20.463  test: (groupid=0, jobs=1): err= 0: pid=140075: Sat Dec 14 00:03:50 2024
00:30:20.463    read: IOPS=16.9k, BW=65.9MiB/s (69.1MB/s)(132MiB/2001msec)
00:30:20.463      slat (usec): min=3, max=191, avg= 5.88, stdev= 3.79
00:30:20.463      clat (usec): min=217, max=9349, avg=3767.01, stdev=478.95
00:30:20.463       lat (usec): min=221, max=9445, avg=3772.88, stdev=479.62
00:30:20.463      clat percentiles (usec):
00:30:20.463       |  1.00th=[ 3163],  5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3490],
00:30:20.463       | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3687],
00:30:20.463       | 70.00th=[ 3785], 80.00th=[ 3949], 90.00th=[ 4228], 95.00th=[ 4883],
00:30:20.463       | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 6849], 99.95th=[ 8455],
00:30:20.463       | 99.99th=[ 9241]
00:30:20.463     bw (  KiB/s): min=58043, max=71208, per=98.53%, avg=66449.00, stdev=7301.01, samples=3
00:30:20.463     iops        : min=14510, max=17802, avg=16612.00, stdev=1825.68, samples=3
00:30:20.463    write: IOPS=16.9k, BW=66.0MiB/s (69.2MB/s)(132MiB/2001msec); 0 zone resets
00:30:20.463      slat (nsec): min=4005, max=52918, avg=6239.20, stdev=3807.37
00:30:20.463      clat (usec): min=193, max=9229, avg=3785.60, stdev=490.24
00:30:20.463       lat (usec): min=199, max=9264, avg=3791.84, stdev=490.95
00:30:20.463      clat percentiles (usec):
00:30:20.463       |  1.00th=[ 3195],  5.00th=[ 3326], 10.00th=[ 3425], 20.00th=[ 3523],
00:30:20.463       | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3720],
00:30:20.463       | 70.00th=[ 3785], 80.00th=[ 3982], 90.00th=[ 4228], 95.00th=[ 4948],
00:30:20.463       | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7373], 99.95th=[ 8586],
00:30:20.463       | 99.99th=[ 9110]
00:30:20.463     bw (  KiB/s): min=58371, max=71368, per=98.17%, avg=66371.67, stdev=7000.00, samples=3
00:30:20.463     iops        : min=14592, max=17842, avg=16592.67, stdev=1750.43, samples=3
00:30:20.463    lat (usec)   : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01%
00:30:20.463    lat (msec)   : 2=0.06%, 4=81.84%, 10=18.07%
00:30:20.463    cpu          : usr=99.70%, sys=0.15%, ctx=17, majf=0, minf=36
00:30:20.463    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:30:20.463       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:20.463       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:30:20.463       issued rwts: total=33737,33822,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:20.463       latency   : target=0, window=0, percentile=100.00%, depth=128
00:30:20.463  
00:30:20.463  Run status group 0 (all jobs):
00:30:20.463     READ: bw=65.9MiB/s (69.1MB/s), 65.9MiB/s-65.9MiB/s (69.1MB/s-69.1MB/s), io=132MiB (138MB), run=2001-2001msec
00:30:20.463    WRITE: bw=66.0MiB/s (69.2MB/s), 66.0MiB/s-66.0MiB/s (69.2MB/s-69.2MB/s), io=132MiB (139MB), run=2001-2001msec
00:30:20.722  -----------------------------------------------------
00:30:20.722  Suppressions used:
00:30:20.722    count      bytes template
00:30:20.722        1         32 /usr/src/fio/parse.c
00:30:20.722  -----------------------------------------------------
00:30:20.722  
00:30:20.722   00:03:51	-- nvme/nvme.sh@44 -- # ran_fio=true
00:30:20.722   00:03:51	-- nvme/nvme.sh@46 -- # true
00:30:20.722  
00:30:20.722  real	0m4.211s
00:30:20.722  user	0m3.512s
00:30:20.722  sys	0m0.382s
00:30:20.722   00:03:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:20.722  ************************************
00:30:20.722  END TEST nvme_fio
00:30:20.722  ************************************
00:30:20.722   00:03:51	-- common/autotest_common.sh@10 -- # set +x
00:30:20.722  
00:30:20.722  real	0m47.542s
00:30:20.722  user	2m5.626s
00:30:20.722  sys	0m8.191s
00:30:20.722   00:03:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:20.722  ************************************
00:30:20.722  END TEST nvme
00:30:20.722   00:03:51	-- common/autotest_common.sh@10 -- # set +x
00:30:20.722  ************************************
00:30:20.722   00:03:51	-- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]]
00:30:20.722   00:03:51	-- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:30:20.722   00:03:51	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:30:20.722   00:03:51	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:20.722   00:03:51	-- common/autotest_common.sh@10 -- # set +x
00:30:20.722  ************************************
00:30:20.722  START TEST nvme_scc
00:30:20.722  ************************************
00:30:20.722   00:03:51	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:30:20.982  * Looking for test storage...
00:30:20.982  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:30:20.982     00:03:51	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:30:20.982      00:03:51	-- common/autotest_common.sh@1690 -- # lcov --version
00:30:20.982      00:03:51	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:30:20.982     00:03:51	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:30:20.982     00:03:51	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:30:20.982     00:03:51	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:30:20.982     00:03:51	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:30:20.982     00:03:51	-- scripts/common.sh@335 -- # IFS=.-:
00:30:20.982     00:03:51	-- scripts/common.sh@335 -- # read -ra ver1
00:30:20.982     00:03:51	-- scripts/common.sh@336 -- # IFS=.-:
00:30:20.982     00:03:51	-- scripts/common.sh@336 -- # read -ra ver2
00:30:20.982     00:03:51	-- scripts/common.sh@337 -- # local 'op=<'
00:30:20.982     00:03:51	-- scripts/common.sh@339 -- # ver1_l=2
00:30:20.982     00:03:51	-- scripts/common.sh@340 -- # ver2_l=1
00:30:20.982     00:03:51	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:30:20.982     00:03:51	-- scripts/common.sh@343 -- # case "$op" in
00:30:20.982     00:03:51	-- scripts/common.sh@344 -- # : 1
00:30:20.982     00:03:51	-- scripts/common.sh@363 -- # (( v = 0 ))
00:30:20.982     00:03:51	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:20.982      00:03:51	-- scripts/common.sh@364 -- # decimal 1
00:30:20.982      00:03:51	-- scripts/common.sh@352 -- # local d=1
00:30:20.982      00:03:51	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:20.982      00:03:51	-- scripts/common.sh@354 -- # echo 1
00:30:20.982     00:03:51	-- scripts/common.sh@364 -- # ver1[v]=1
00:30:20.982      00:03:51	-- scripts/common.sh@365 -- # decimal 2
00:30:20.982      00:03:51	-- scripts/common.sh@352 -- # local d=2
00:30:20.982      00:03:51	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:20.982      00:03:51	-- scripts/common.sh@354 -- # echo 2
00:30:20.982     00:03:51	-- scripts/common.sh@365 -- # ver2[v]=2
00:30:20.982     00:03:51	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:30:20.982     00:03:51	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:30:20.982     00:03:51	-- scripts/common.sh@367 -- # return 0
00:30:20.982     00:03:51	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:20.982     00:03:51	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:30:20.982  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:20.982  		--rc genhtml_branch_coverage=1
00:30:20.982  		--rc genhtml_function_coverage=1
00:30:20.982  		--rc genhtml_legend=1
00:30:20.982  		--rc geninfo_all_blocks=1
00:30:20.982  		--rc geninfo_unexecuted_blocks=1
00:30:20.982  		
00:30:20.982  		'
00:30:20.982     00:03:51	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:30:20.982  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:20.982  		--rc genhtml_branch_coverage=1
00:30:20.982  		--rc genhtml_function_coverage=1
00:30:20.982  		--rc genhtml_legend=1
00:30:20.982  		--rc geninfo_all_blocks=1
00:30:20.982  		--rc geninfo_unexecuted_blocks=1
00:30:20.982  		
00:30:20.982  		'
00:30:20.982     00:03:51	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:30:20.982  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:20.982  		--rc genhtml_branch_coverage=1
00:30:20.982  		--rc genhtml_function_coverage=1
00:30:20.982  		--rc genhtml_legend=1
00:30:20.982  		--rc geninfo_all_blocks=1
00:30:20.982  		--rc geninfo_unexecuted_blocks=1
00:30:20.982  		
00:30:20.982  		'
00:30:20.982     00:03:51	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:30:20.982  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:20.982  		--rc genhtml_branch_coverage=1
00:30:20.982  		--rc genhtml_function_coverage=1
00:30:20.982  		--rc genhtml_legend=1
00:30:20.982  		--rc geninfo_all_blocks=1
00:30:20.982  		--rc geninfo_unexecuted_blocks=1
00:30:20.982  		
00:30:20.982  		'
00:30:20.982    00:03:51	-- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:30:20.982       00:03:51	-- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:30:20.982      00:03:51	-- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:30:20.982     00:03:51	-- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:30:20.982     00:03:51	-- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:30:20.982      00:03:51	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:20.982      00:03:51	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:20.982      00:03:51	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:20.982       00:03:51	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:30:20.982       00:03:51	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:30:20.982       00:03:51	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:30:20.982       00:03:51	-- paths/export.sh@5 -- # export PATH
00:30:20.982       00:03:51	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:30:20.982     00:03:51	-- nvme/functions.sh@10 -- # ctrls=()
00:30:20.982     00:03:51	-- nvme/functions.sh@10 -- # declare -A ctrls
00:30:20.982     00:03:51	-- nvme/functions.sh@11 -- # nvmes=()
00:30:20.982     00:03:51	-- nvme/functions.sh@11 -- # declare -A nvmes
00:30:20.982     00:03:51	-- nvme/functions.sh@12 -- # bdfs=()
00:30:20.982     00:03:51	-- nvme/functions.sh@12 -- # declare -A bdfs
00:30:20.982     00:03:51	-- nvme/functions.sh@13 -- # ordered_ctrls=()
00:30:20.982     00:03:51	-- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:30:20.982     00:03:51	-- nvme/functions.sh@14 -- # nvme_name=
00:30:20.982    00:03:51	-- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:30:20.982    00:03:51	-- nvme/nvme_scc.sh@12 -- # uname
00:30:20.982   00:03:51	-- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:30:20.982   00:03:51	-- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]]
00:30:20.982   00:03:51	-- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:30:21.241  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:30:21.241  Waiting for block devices as requested
00:30:21.502  0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
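setup.sh reset above rebinds 0000:00:06.0 back to the kernel nvme driver so nvme-cli can reach it; scan_nvme_ctrls (traced below) then walks /sys/class/nvme/nvme* and snapshots every id-ctrl field into a per-controller associative array. A condensed sketch of the nvme_get parse that fills the remainder of this log (the real helper takes extra arguments; this is the pattern, not the verbatim source):

nvme_get() {
    # Split each 'field : value' line of id-ctrl output on ':' and
    # store it, e.g. nvme0[vid]=0x1b36, in a global associative array.
    local ref=$1 reg val
    local -gA "$ref=()"
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[${reg// /}]=\"${val# }\""
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ref")
}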
00:30:21.502   00:03:52	-- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls
00:30:21.502   00:03:52	-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:30:21.502   00:03:52	-- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:30:21.502   00:03:52	-- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:30:21.502   00:03:52	-- nvme/functions.sh@49 -- # pci=0000:00:06.0
00:30:21.503   00:03:52	-- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0
00:30:21.503   00:03:52	-- scripts/common.sh@15 -- # local i
00:30:21.503   00:03:52	-- scripts/common.sh@18 -- # [[    =~  0000:00:06.0  ]]
00:30:21.503   00:03:52	-- scripts/common.sh@22 -- # [[ -z '' ]]
00:30:21.503   00:03:52	-- scripts/common.sh@24 -- # return 0
00:30:21.503   00:03:52	-- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:30:21.503   00:03:52	-- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:30:21.503   00:03:52	-- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@18 -- # shift
00:30:21.503   00:03:52	-- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503    00:03:52	-- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n '' ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  12340                ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340               "'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[sn]='12340               '
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl                          "'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl                          '
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0   "'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0   '
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[rab]=6
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[mec]=0
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[acl]=3
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.503   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.503   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:30:21.503   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"'
00:30:21.503    00:03:52	-- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[npss]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[kas]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[pels]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.504   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.504   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:30:21.504   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:30:21.504    00:03:52	-- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
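A note on the two values just parsed: per the NVMe spec, SQES and CQES pack the required (low nibble) and maximum (high nibble) queue entry sizes as powers of two, so 0x66 and 0x44 decode to 64-byte submission entries and 16-byte completion entries. A quick check in bash:

```bash
# Decode the SQES/CQES bytes seen above. Per the NVMe spec, the low nibble
# is the required entry size and the high nibble the maximum, both as
# powers of two.
sqes=0x66; cqes=0x44
printf 'SQE: min %d B, max %d B\n' $((2 ** (sqes & 0xf))) $((2 ** (sqes >> 4)))
printf 'CQE: min %d B, max %d B\n' $((2 ** (cqes & 0xf))) $((2 ** (cqes >> 4)))
# -> SQE: min 64 B, max 64 B / CQE: min 16 B, max 16 B
```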
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[nn]=256
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[fna]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[awun]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12340 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.505   00:03:52	-- nvme/functions.sh@22 -- # [[ -n - ]]
00:30:21.505   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:30:21.505    00:03:52	-- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:30:21.505   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
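The long run of IFS=:/read/eval steps above is functions.sh's nvme_get helper flattening `nvme id-ctrl` output into a global associative array (here nvme0). A condensed sketch of the pattern, reconstructed from the trace rather than copied from the source:

```bash
# Condensed sketch of the nvme_get pattern traced above: split each
# "key : value" line of `nvme id-ctrl` output on the first colon and stash
# it in a global associative array. Reconstructed for illustration; the
# real helper lives in test/nvme/functions.sh.
nvme_get() {
	local ref=$1 reg val
	shift
	local -gA "$ref=()"                      # e.g. a global array named nvme0
	while IFS=: read -r reg val; do
		[[ -n $val ]] || continue              # skip lines with no value part
		reg=${reg//[[:space:]]/}               # "oacs   " -> "oacs"
		val=${val#"${val%%[![:space:]]*}"}     # trim leading whitespace
		eval "${ref}[\$reg]=\$val"             # nvme0[oacs]=0x12a, etc.
	done < <("$@")                           # e.g. nvme id-ctrl /dev/nvme0
}

# nvme_get nvme0 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
# echo "${nvme0[oncs]}"   # -> 0x15d on this controller
```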
00:30:21.506   00:03:52	-- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:30:21.506   00:03:52	-- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:30:21.506   00:03:52	-- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:30:21.506   00:03:52	-- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:30:21.506   00:03:52	-- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@18 -- # shift
00:30:21.506   00:03:52	-- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506    00:03:52	-- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n '' ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.506   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.506   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:30:21.506    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:30:21.506   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.507   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.507   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:30:21.507    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.507   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:30:21.507   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:30:21.507    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.507   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:30:21.507   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"'
00:30:21.507    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.507   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:30:21.507   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"'
00:30:21.507    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.507   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:30:21.507   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:30:21.507    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.507   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:30:21.507   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:30:21.507    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.507   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:30:21.507   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:30:21.507    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.507   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:30:21.507   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:30:21.507    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.507   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:30:21.507   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:30:21.507    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.507   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:30:21.507   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:30:21.507    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.507   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:30:21.507   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:30:21.507    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
00:30:21.507   00:03:52	-- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:30:21.507   00:03:52	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:30:21.507    00:03:52	-- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # IFS=:
00:30:21.507   00:03:52	-- nvme/functions.sh@21 -- # read -r reg val
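Decoding the namespace values just captured: FLBAS's low nibble selects the LBA format in use, so flbas=0x4 picks lbaf4 (lbads:12, ms:0), i.e. 4096-byte blocks with no metadata, and nsze=0x140000 blocks works out to exactly 5 GiB, matching the test banner printed further down. Arithmetic check:

```bash
# FLBAS bits 3:0 give the index of the LBA format in use; lbaf4 above has
# lbads:12 and ms:0, i.e. 4096-byte blocks without metadata.
flbas=0x4 lbads=12 nsze=0x140000
echo "format in use: lbaf$(( flbas & 0xf ))"                 # -> lbaf4
echo "block size: $(( 1 << lbads )) bytes"                   # -> 4096 bytes
echo "namespace size: $(( nsze * (1 << lbads) >> 30 )) GiB"  # -> 5 GiB
```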
00:30:21.507   00:03:52	-- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:30:21.507   00:03:52	-- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:30:21.507   00:03:52	-- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:30:21.507   00:03:52	-- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0
00:30:21.507   00:03:52	-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:30:21.507   00:03:52	-- nvme/functions.sh@65 -- # (( 1 > 0 ))
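End state of the controller scan: three parallel maps keyed by controller name, which later helpers index by device. A sketch with this run's values (illustrative, not the verbatim functions.sh state):

```bash
# End state of the scan, with this run's values (illustrative):
declare -A ctrls=([nvme0]=nvme0)         # controller -> identify array name
declare -A nvmes=([nvme0]=nvme0_ns)      # controller -> namespace map name
declare -A bdfs=([nvme0]="0000:00:06.0") # controller -> PCI address
(( ${#ctrls[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
```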
00:30:21.507    00:03:52	-- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:30:21.507    00:03:52	-- nvme/functions.sh@202 -- # local _ctrls feature=scc
00:30:21.507    00:03:52	-- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:30:21.507     00:03:52	-- nvme/functions.sh@204 -- # get_ctrls_with_feature scc
00:30:21.507     00:03:52	-- nvme/functions.sh@190 -- # (( 1 == 0 ))
00:30:21.507     00:03:52	-- nvme/functions.sh@192 -- # local ctrl feature=scc
00:30:21.507      00:03:52	-- nvme/functions.sh@194 -- # type -t ctrl_has_scc
00:30:21.507     00:03:52	-- nvme/functions.sh@194 -- # [[ function == function ]]
00:30:21.507     00:03:52	-- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:30:21.507     00:03:52	-- nvme/functions.sh@197 -- # ctrl_has_scc nvme0
00:30:21.507     00:03:52	-- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs
00:30:21.507      00:03:52	-- nvme/functions.sh@184 -- # get_oncs nvme0
00:30:21.507      00:03:52	-- nvme/functions.sh@169 -- # local ctrl=nvme0
00:30:21.507      00:03:52	-- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs
00:30:21.507      00:03:52	-- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:30:21.507      00:03:52	-- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:30:21.507      00:03:52	-- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:30:21.507      00:03:52	-- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:30:21.507      00:03:52	-- nvme/functions.sh@76 -- # echo 0x15d
00:30:21.507     00:03:52	-- nvme/functions.sh@184 -- # oncs=0x15d
00:30:21.507     00:03:52	-- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:30:21.507     00:03:52	-- nvme/functions.sh@197 -- # echo nvme0
00:30:21.507    00:03:52	-- nvme/functions.sh@205 -- # (( 1 > 0 ))
00:30:21.507    00:03:52	-- nvme/functions.sh@206 -- # echo nvme0
00:30:21.507    00:03:52	-- nvme/functions.sh@207 -- # return 0
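The feature probe above boils down to one bit test: ONCS bit 8 in Identify Controller advertises the Copy command, and this controller's 0x15d has it set, so nvme0 is selected for the SCC tests. Standalone:

```bash
# ONCS bit 8 advertises the Copy command (Simple Copy). 0x15d has it set:
oncs=0x15d
if (( oncs & (1 << 8) )); then
	echo "nvme0 supports the Copy command"
fi
```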
00:30:21.507   00:03:52	-- nvme/nvme_scc.sh@17 -- # ctrl=nvme0
00:30:21.507   00:03:52	-- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0
00:30:21.507   00:03:52	-- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:30:22.075  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:30:22.075  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
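setup.sh left 00:03.0 alone because vda carries mounted filesystems, and moved the NVMe device at 0000:00:06.0 from the kernel nvme driver to uio_pci_generic so SPDK's userspace driver can claim it. A rough by-hand equivalent via sysfs (standard kernel paths; run as root, and only on a device safe to detach):

```bash
# Rough by-hand equivalent of what setup.sh did for the NVMe device
# (standard sysfs paths; run as root, only on a device safe to detach):
bdf=0000:00:06.0
echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"    # detach nvme
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe                   # bind via override
```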
00:30:23.453   00:03:54	-- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0'
00:30:23.453   00:03:54	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:30:23.453   00:03:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:23.453   00:03:54	-- common/autotest_common.sh@10 -- # set +x
00:30:23.453  ************************************
00:30:23.453  START TEST nvme_simple_copy
00:30:23.453  ************************************
00:30:23.453   00:03:54	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0'
00:30:23.711  Initializing NVMe Controllers
00:30:23.711  Attaching to 0000:00:06.0
00:30:23.711  Controller supports SCC. Attached to 0000:00:06.0
00:30:23.711    Namespace ID: 1 size: 5GB
00:30:23.711  Initialization complete.
00:30:23.711  
00:30:23.711  Controller QEMU NVMe Ctrl       (12340               )
00:30:23.711  Controller PCI vendor:6966 PCI subsystem vendor:6900
00:30:23.711  Namespace Block Size:4096
00:30:23.711  Writing LBAs 0 to 63 with Random Data
00:30:23.711  Copied LBAs from 0 - 63 to the Destination LBA 256
00:30:23.711  LBAs matching Written Data: 64
00:30:23.970  
00:30:23.970  real	0m0.325s
00:30:23.970  user	0m0.139s
00:30:23.970  sys	0m0.084s
00:30:23.970   00:03:54	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:23.970   00:03:54	-- common/autotest_common.sh@10 -- # set +x
00:30:23.970  ************************************
00:30:23.970  END TEST nvme_simple_copy
00:30:23.970  ************************************
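What the binary above did: write LBAs 0-63 with random data, issue a Simple Copy to destination LBA 256, read back and compare, hence "LBAs matching Written Data: 64". A loose userland analogue with nvme-cli and dd (not the test's actual implementation; the nvme-cli flag names are assumptions, verify with `nvme copy --help`):

```bash
# Loose userland analogue of the simple_copy flow (NOT the test's code;
# nvme-cli flag names are assumptions, confirm with `nvme copy --help`):
dev=/dev/nvme0n1 bs=4096
dd if=/dev/urandom of=pattern.bin bs=$bs count=64 status=none
dd if=pattern.bin of=$dev bs=$bs count=64 oflag=direct status=none  # LBAs 0-63
nvme copy "$dev" --sdlba=256 --blocks=63 --slbs=0   # NLB is 0-based: 64 blocks
dd if=$dev of=check.bin bs=$bs skip=256 count=64 iflag=direct status=none
cmp pattern.bin check.bin && echo "LBAs matching Written Data: 64"
```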
00:30:23.970  
00:30:23.970  real	0m3.103s
00:30:23.970  user	0m0.899s
00:30:23.970  sys	0m2.107s
00:30:23.970   00:03:54	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:23.970   00:03:54	-- common/autotest_common.sh@10 -- # set +x
00:30:23.970  ************************************
00:30:23.970  END TEST nvme_scc
00:30:23.970  ************************************
00:30:23.970   00:03:54	-- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]]
00:30:23.970   00:03:54	-- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:30:23.970   00:03:54	-- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]]
00:30:23.970   00:03:54	-- spdk/autotest.sh@225 -- # [[ 0 -eq 1 ]]
00:30:23.970   00:03:54	-- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]]
00:30:23.970   00:03:54	-- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:30:23.970   00:03:54	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:30:23.970   00:03:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:23.970   00:03:54	-- common/autotest_common.sh@10 -- # set +x
00:30:23.970  ************************************
00:30:23.970  START TEST nvme_rpc
00:30:23.970  ************************************
00:30:23.970   00:03:54	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:30:23.970  * Looking for test storage...
00:30:23.970  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:30:23.970    00:03:54	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:30:23.970     00:03:54	-- common/autotest_common.sh@1690 -- # lcov --version
00:30:23.970     00:03:54	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:30:24.229    00:03:54	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:30:24.229    00:03:54	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:30:24.229    00:03:54	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:30:24.229    00:03:54	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:30:24.229    00:03:54	-- scripts/common.sh@335 -- # IFS=.-:
00:30:24.229    00:03:54	-- scripts/common.sh@335 -- # read -ra ver1
00:30:24.229    00:03:54	-- scripts/common.sh@336 -- # IFS=.-:
00:30:24.229    00:03:54	-- scripts/common.sh@336 -- # read -ra ver2
00:30:24.229    00:03:54	-- scripts/common.sh@337 -- # local 'op=<'
00:30:24.230    00:03:54	-- scripts/common.sh@339 -- # ver1_l=2
00:30:24.230    00:03:54	-- scripts/common.sh@340 -- # ver2_l=1
00:30:24.230    00:03:54	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:30:24.230    00:03:54	-- scripts/common.sh@343 -- # case "$op" in
00:30:24.230    00:03:54	-- scripts/common.sh@344 -- # : 1
00:30:24.230    00:03:54	-- scripts/common.sh@363 -- # (( v = 0 ))
00:30:24.230    00:03:54	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:24.230     00:03:54	-- scripts/common.sh@364 -- # decimal 1
00:30:24.230     00:03:54	-- scripts/common.sh@352 -- # local d=1
00:30:24.230     00:03:54	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:24.230     00:03:54	-- scripts/common.sh@354 -- # echo 1
00:30:24.230    00:03:54	-- scripts/common.sh@364 -- # ver1[v]=1
00:30:24.230     00:03:54	-- scripts/common.sh@365 -- # decimal 2
00:30:24.230     00:03:54	-- scripts/common.sh@352 -- # local d=2
00:30:24.230     00:03:54	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:24.230     00:03:54	-- scripts/common.sh@354 -- # echo 2
00:30:24.230    00:03:54	-- scripts/common.sh@365 -- # ver2[v]=2
00:30:24.230    00:03:54	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:30:24.230    00:03:54	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:30:24.230    00:03:54	-- scripts/common.sh@367 -- # return 0
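The cmp_versions trace above splits both version strings on `.`, `-`, and `:` and compares them numerically component by component, so `lt 1.15 2` is true and the newer lcov option syntax is selected below. A condensed sketch of the same logic (not the verbatim scripts/common.sh):

```bash
# Sketch of the version comparison traced above (not the verbatim
# scripts/common.sh): split on . - : and compare numerically per component.
lt() {
	local -a v1 v2
	local i n
	IFS=.-: read -ra v1 <<< "$1"
	IFS=.-: read -ra v2 <<< "$2"
	n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
	for ((i = 0; i < n; i++)); do
		(( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
		(( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
	done
	return 1   # versions are equal
}
lt 1.15 2 && echo "lcov 1.15 predates 2.x -> use the new LCOV_OPTS syntax"
```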
00:30:24.230    00:03:54	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:24.230    00:03:54	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:30:24.230  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:24.230  		--rc genhtml_branch_coverage=1
00:30:24.230  		--rc genhtml_function_coverage=1
00:30:24.230  		--rc genhtml_legend=1
00:30:24.230  		--rc geninfo_all_blocks=1
00:30:24.230  		--rc geninfo_unexecuted_blocks=1
00:30:24.230  		
00:30:24.230  		'
00:30:24.230    00:03:54	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:30:24.230  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:24.230  		--rc genhtml_branch_coverage=1
00:30:24.230  		--rc genhtml_function_coverage=1
00:30:24.230  		--rc genhtml_legend=1
00:30:24.230  		--rc geninfo_all_blocks=1
00:30:24.230  		--rc geninfo_unexecuted_blocks=1
00:30:24.230  		
00:30:24.230  		'
00:30:24.230    00:03:54	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:30:24.230  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:24.230  		--rc genhtml_branch_coverage=1
00:30:24.230  		--rc genhtml_function_coverage=1
00:30:24.230  		--rc genhtml_legend=1
00:30:24.230  		--rc geninfo_all_blocks=1
00:30:24.230  		--rc geninfo_unexecuted_blocks=1
00:30:24.230  		
00:30:24.230  		'
00:30:24.230    00:03:54	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:30:24.230  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:24.230  		--rc genhtml_branch_coverage=1
00:30:24.230  		--rc genhtml_function_coverage=1
00:30:24.230  		--rc genhtml_legend=1
00:30:24.230  		--rc geninfo_all_blocks=1
00:30:24.230  		--rc geninfo_unexecuted_blocks=1
00:30:24.230  		
00:30:24.230  		'
00:30:24.230   00:03:54	-- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:30:24.230    00:03:54	-- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:30:24.230    00:03:54	-- common/autotest_common.sh@1519 -- # bdfs=()
00:30:24.230    00:03:54	-- common/autotest_common.sh@1519 -- # local bdfs
00:30:24.230    00:03:54	-- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:30:24.230     00:03:54	-- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:30:24.230     00:03:54	-- common/autotest_common.sh@1508 -- # bdfs=()
00:30:24.230     00:03:54	-- common/autotest_common.sh@1508 -- # local bdfs
00:30:24.230     00:03:54	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:30:24.230      00:03:54	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:30:24.230      00:03:54	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:30:24.230     00:03:54	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:30:24.230     00:03:54	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0
00:30:24.230    00:03:54	-- common/autotest_common.sh@1522 -- # echo 0000:00:06.0
00:30:24.230   00:03:54	-- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0
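How the trace arrived at bdf=0000:00:06.0: gen_nvme.sh emits an SPDK bdev config JSON and jq extracts each controller's transport address; the first entry wins. Standalone:

```bash
# Standalone version of the BDF lookup traced above:
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
echo "${bdfs[0]}"   # -> 0000:00:06.0 on this VM
```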
00:30:24.230   00:03:54	-- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=140582
00:30:24.230   00:03:54	-- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:30:24.230   00:03:54	-- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:30:24.230   00:03:54	-- nvme/nvme_rpc.sh@19 -- # waitforlisten 140582
00:30:24.230   00:03:54	-- common/autotest_common.sh@829 -- # '[' -z 140582 ']'
00:30:24.230   00:03:54	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:24.230   00:03:54	-- common/autotest_common.sh@834 -- # local max_retries=100
00:30:24.230   00:03:54	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:24.230  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:24.230   00:03:54	-- common/autotest_common.sh@838 -- # xtrace_disable
00:30:24.230   00:03:54	-- common/autotest_common.sh@10 -- # set +x
00:30:24.230  [2024-12-14 00:03:54.894175] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:30:24.230  [2024-12-14 00:03:54.894376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140582 ]
00:30:24.496  [2024-12-14 00:03:55.075210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:30:24.755  [2024-12-14 00:03:55.318332] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:30:24.756  [2024-12-14 00:03:55.318719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:24.756  [2024-12-14 00:03:55.318731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:26.131   00:03:56	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:26.131   00:03:56	-- common/autotest_common.sh@862 -- # return 0
00:30:26.131   00:03:56	-- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
00:30:26.131  Nvme0n1
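The "Nvme0n1" line is the RPC's return value: attaching controller Nvme0 over PCIe exposes its namespace 1 as bdev Nvme0n1. The same call in isolation (rpc.py talks to /var/tmp/spdk.sock by default):

```bash
# The attach call in isolation; rpc.py prints the created bdev names,
# here Nvme0n1:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
	bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
```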
00:30:26.131   00:03:56	-- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
00:30:26.131   00:03:56	-- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
00:30:26.390  request:
00:30:26.390  {
00:30:26.390    "filename": "non_existing_file",
00:30:26.390    "bdev_name": "Nvme0n1",
00:30:26.390    "method": "bdev_nvme_apply_firmware",
00:30:26.390    "req_id": 1
00:30:26.390  }
00:30:26.390  Got JSON-RPC error response
00:30:26.390  response:
00:30:26.390  {
00:30:26.390    "code": -32603,
00:30:26.390    "message": "open file failed."
00:30:26.390  }
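This is a deliberate negative test: applying firmware from a missing file must fail with the JSON-RPC error above, and the script records the non-zero status so the `'[' -z 1 ']'` guard below does not trip. A sketch of the check:

```bash
# The negative test in sketch form: the RPC must fail for a missing file,
# and the recorded status is what the '[ -z 1 ]' guard below inspects.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rv=0
$rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1 || rv=$?
(( rv != 0 )) || { echo "bogus firmware file unexpectedly accepted" >&2; exit 1; }
```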
00:30:26.390   00:03:56	-- nvme/nvme_rpc.sh@32 -- # rv=1
00:30:26.390   00:03:56	-- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
00:30:26.390   00:03:56	-- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:30:26.648   00:03:57	-- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:30:26.648   00:03:57	-- nvme/nvme_rpc.sh@40 -- # killprocess 140582
00:30:26.648   00:03:57	-- common/autotest_common.sh@936 -- # '[' -z 140582 ']'
00:30:26.648   00:03:57	-- common/autotest_common.sh@940 -- # kill -0 140582
00:30:26.648    00:03:57	-- common/autotest_common.sh@941 -- # uname
00:30:26.648   00:03:57	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:30:26.648    00:03:57	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140582
00:30:26.648   00:03:57	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:30:26.648   00:03:57	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:30:26.648  killing process with pid 140582
00:30:26.648   00:03:57	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 140582'
00:30:26.648   00:03:57	-- common/autotest_common.sh@955 -- # kill 140582
00:30:26.648   00:03:57	-- common/autotest_common.sh@960 -- # wait 140582
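killprocess, as traced: confirm the pid is alive with `kill -0`, check via ps that the target is not a sudo wrapper, then kill it and `wait` to reap the exit status (wait works here because spdk_tgt is a child of the test shell). A simplified sketch of the helper:

```bash
# Simplified sketch of the killprocess helper as traced (the real one is
# in common/autotest_common.sh):
killprocess() {
	local pid=$1
	kill -0 "$pid" || return 1                          # is it still alive?
	[[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1  # sketch: skip sudo wrappers
	echo "killing process with pid $pid"
	kill "$pid" && wait "$pid"                          # reap for the exit code
}
killprocess 140582
```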
00:30:28.550  
00:30:28.550  real	0m4.464s
00:30:28.550  user	0m8.255s
00:30:28.550  sys	0m0.743s
00:30:28.550   00:03:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:28.550   00:03:59	-- common/autotest_common.sh@10 -- # set +x
00:30:28.550  ************************************
00:30:28.550  END TEST nvme_rpc
00:30:28.550  ************************************
00:30:28.550   00:03:59	-- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:30:28.550   00:03:59	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:30:28.550   00:03:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:28.550   00:03:59	-- common/autotest_common.sh@10 -- # set +x
00:30:28.550  ************************************
00:30:28.550  START TEST nvme_rpc_timeouts
00:30:28.550  ************************************
00:30:28.550   00:03:59	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:30:28.550  * Looking for test storage...
00:30:28.550  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:30:28.550    00:03:59	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:30:28.550     00:03:59	-- common/autotest_common.sh@1690 -- # lcov --version
00:30:28.550     00:03:59	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:30:28.550    00:03:59	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:30:28.550    00:03:59	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:30:28.550    00:03:59	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:30:28.550    00:03:59	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:30:28.550    00:03:59	-- scripts/common.sh@335 -- # IFS=.-:
00:30:28.550    00:03:59	-- scripts/common.sh@335 -- # read -ra ver1
00:30:28.550    00:03:59	-- scripts/common.sh@336 -- # IFS=.-:
00:30:28.550    00:03:59	-- scripts/common.sh@336 -- # read -ra ver2
00:30:28.550    00:03:59	-- scripts/common.sh@337 -- # local 'op=<'
00:30:28.550    00:03:59	-- scripts/common.sh@339 -- # ver1_l=2
00:30:28.550    00:03:59	-- scripts/common.sh@340 -- # ver2_l=1
00:30:28.550    00:03:59	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:30:28.550    00:03:59	-- scripts/common.sh@343 -- # case "$op" in
00:30:28.550    00:03:59	-- scripts/common.sh@344 -- # : 1
00:30:28.550    00:03:59	-- scripts/common.sh@363 -- # (( v = 0 ))
00:30:28.550    00:03:59	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:28.550     00:03:59	-- scripts/common.sh@364 -- # decimal 1
00:30:28.550     00:03:59	-- scripts/common.sh@352 -- # local d=1
00:30:28.550     00:03:59	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:28.550     00:03:59	-- scripts/common.sh@354 -- # echo 1
00:30:28.550    00:03:59	-- scripts/common.sh@364 -- # ver1[v]=1
00:30:28.550     00:03:59	-- scripts/common.sh@365 -- # decimal 2
00:30:28.550     00:03:59	-- scripts/common.sh@352 -- # local d=2
00:30:28.551     00:03:59	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:28.551     00:03:59	-- scripts/common.sh@354 -- # echo 2
00:30:28.551    00:03:59	-- scripts/common.sh@365 -- # ver2[v]=2
00:30:28.551    00:03:59	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:30:28.551    00:03:59	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:30:28.551    00:03:59	-- scripts/common.sh@367 -- # return 0
00:30:28.551    00:03:59	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:28.551    00:03:59	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:30:28.551  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:28.551  		--rc genhtml_branch_coverage=1
00:30:28.551  		--rc genhtml_function_coverage=1
00:30:28.551  		--rc genhtml_legend=1
00:30:28.551  		--rc geninfo_all_blocks=1
00:30:28.551  		--rc geninfo_unexecuted_blocks=1
00:30:28.551  		
00:30:28.551  		'
00:30:28.551    00:03:59	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:30:28.551  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:28.551  		--rc genhtml_branch_coverage=1
00:30:28.551  		--rc genhtml_function_coverage=1
00:30:28.551  		--rc genhtml_legend=1
00:30:28.551  		--rc geninfo_all_blocks=1
00:30:28.551  		--rc geninfo_unexecuted_blocks=1
00:30:28.551  		
00:30:28.551  		'
00:30:28.551    00:03:59	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:30:28.551  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:28.551  		--rc genhtml_branch_coverage=1
00:30:28.551  		--rc genhtml_function_coverage=1
00:30:28.551  		--rc genhtml_legend=1
00:30:28.551  		--rc geninfo_all_blocks=1
00:30:28.551  		--rc geninfo_unexecuted_blocks=1
00:30:28.551  		
00:30:28.551  		'
00:30:28.551    00:03:59	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:30:28.551  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:28.551  		--rc genhtml_branch_coverage=1
00:30:28.551  		--rc genhtml_function_coverage=1
00:30:28.551  		--rc genhtml_legend=1
00:30:28.551  		--rc geninfo_all_blocks=1
00:30:28.551  		--rc geninfo_unexecuted_blocks=1
00:30:28.551  		
00:30:28.551  		'
00:30:28.551   00:03:59	-- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:30:28.551   00:03:59	-- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_140664
00:30:28.551   00:03:59	-- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_140664
00:30:28.551   00:03:59	-- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=140696
00:30:28.551   00:03:59	-- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
00:30:28.551   00:03:59	-- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:30:28.551   00:03:59	-- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 140696
00:30:28.551   00:03:59	-- common/autotest_common.sh@829 -- # '[' -z 140696 ']'
00:30:28.551   00:03:59	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:28.551   00:03:59	-- common/autotest_common.sh@834 -- # local max_retries=100
00:30:28.551   00:03:59	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:28.551  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:28.551   00:03:59	-- common/autotest_common.sh@838 -- # xtrace_disable
00:30:28.551   00:03:59	-- common/autotest_common.sh@10 -- # set +x
00:30:28.810  [2024-12-14 00:03:59.327440] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:30:28.810  [2024-12-14 00:03:59.327890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140696 ]
00:30:28.810  [2024-12-14 00:03:59.498270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:30:29.069  [2024-12-14 00:03:59.678332] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:30:29.069  [2024-12-14 00:03:59.678969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:29.069  [2024-12-14 00:03:59.678981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:30.450  Checking default timeout settings:
00:30:30.450   00:04:00	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:30.450   00:04:00	-- common/autotest_common.sh@862 -- # return 0
00:30:30.450   00:04:00	-- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings:
00:30:30.450   00:04:00	-- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:30:30.717  Making settings changes with rpc:
00:30:30.717   00:04:01	-- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc:
00:30:30.717   00:04:01	-- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
00:30:30.976  Check default vs. modified settings:
00:30:30.976   00:04:01	-- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings:
00:30:30.976   00:04:01	-- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:30:31.234   00:04:01	-- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us'
00:30:31.234   00:04:01	-- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:30:31.234    00:04:01	-- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_140664
00:30:31.234    00:04:01	-- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:30:31.234    00:04:01	-- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:30:31.234   00:04:01	-- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none
00:30:31.234    00:04:01	-- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_140664
00:30:31.234    00:04:01	-- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:30:31.234    00:04:01	-- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:30:31.234  Setting action_on_timeout is changed as expected.
00:30:31.234   00:04:01	-- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort
00:30:31.234   00:04:01	-- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']'
00:30:31.234   00:04:01	-- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected.
00:30:31.234   00:04:01	-- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:30:31.234    00:04:01	-- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_140664
00:30:31.234    00:04:01	-- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:30:31.234    00:04:01	-- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:30:31.235   00:04:01	-- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:30:31.235    00:04:01	-- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_140664
00:30:31.235    00:04:01	-- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:30:31.235    00:04:01	-- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:30:31.235  Setting timeout_us is changed as expected.
00:30:31.235   00:04:01	-- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000
00:30:31.235   00:04:01	-- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']'
00:30:31.235   00:04:01	-- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected.
00:30:31.235   00:04:01	-- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:30:31.235    00:04:01	-- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_140664
00:30:31.235    00:04:01	-- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:30:31.235    00:04:01	-- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:30:31.235   00:04:01	-- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:30:31.235    00:04:01	-- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:30:31.235    00:04:01	-- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_140664
00:30:31.235    00:04:01	-- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:30:31.235  Setting timeout_admin_us is changed as expected.
00:30:31.235   00:04:01	-- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000
00:30:31.235   00:04:01	-- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']'
00:30:31.235   00:04:01	-- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected.
00:30:31.235   00:04:01	-- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT
00:30:31.235   00:04:01	-- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_140664 /tmp/settings_modified_140664
00:30:31.235   00:04:01	-- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 140696
00:30:31.235   00:04:01	-- common/autotest_common.sh@936 -- # '[' -z 140696 ']'
00:30:31.235   00:04:01	-- common/autotest_common.sh@940 -- # kill -0 140696
00:30:31.235    00:04:01	-- common/autotest_common.sh@941 -- # uname
00:30:31.235   00:04:01	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:30:31.235    00:04:01	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140696
00:30:31.235  killing process with pid 140696
00:30:31.235   00:04:01	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:30:31.235   00:04:01	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:30:31.235   00:04:01	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 140696'
00:30:31.235   00:04:01	-- common/autotest_common.sh@955 -- # kill 140696
00:30:31.235   00:04:01	-- common/autotest_common.sh@960 -- # wait 140696
00:30:33.135  RPC TIMEOUT SETTING TEST PASSED.
00:30:33.135   00:04:03	-- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED.
00:30:33.135  ************************************
00:30:33.135  END TEST nvme_rpc_timeouts
00:30:33.135  ************************************
00:30:33.135  
00:30:33.135  real	0m4.742s
00:30:33.135  user	0m9.180s
00:30:33.135  sys	0m0.702s
00:30:33.135   00:04:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:33.135   00:04:03	-- common/autotest_common.sh@10 -- # set +x
00:30:33.135   00:04:03	-- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']'
00:30:33.135   00:04:03	-- spdk/autotest.sh@242 -- # [[ 0 -eq 1 ]]
00:30:33.135   00:04:03	-- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']'
00:30:33.135   00:04:03	-- spdk/autotest.sh@255 -- # timing_exit lib
00:30:33.135   00:04:03	-- common/autotest_common.sh@728 -- # xtrace_disable
00:30:33.135   00:04:03	-- common/autotest_common.sh@10 -- # set +x
00:30:33.394   00:04:03	-- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']'
00:30:33.394   00:04:03	-- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']'
00:30:33.394   00:04:03	-- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']'
00:30:33.394   00:04:03	-- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']'
00:30:33.394   00:04:03	-- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']'
00:30:33.394   00:04:03	-- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']'
00:30:33.394   00:04:03	-- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:30:33.394   00:04:03	-- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:30:33.394   00:04:03	-- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:30:33.394   00:04:03	-- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:30:33.394   00:04:03	-- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:30:33.394   00:04:03	-- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']'
00:30:33.394   00:04:03	-- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:30:33.394   00:04:03	-- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:30:33.394   00:04:03	-- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]]
00:30:33.394   00:04:03	-- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]]
00:30:33.394   00:04:03	-- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]]
00:30:33.394   00:04:03	-- spdk/autotest.sh@365 -- # [[ 1 -eq 1 ]]
00:30:33.394   00:04:03	-- spdk/autotest.sh@366 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f
00:30:33.394   00:04:03	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:30:33.394   00:04:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:33.394   00:04:03	-- common/autotest_common.sh@10 -- # set +x
00:30:33.394  ************************************
00:30:33.394  START TEST blockdev_raid5f
00:30:33.394  ************************************
00:30:33.394   00:04:03	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f
00:30:33.394  * Looking for test storage...
00:30:33.394  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:30:33.394    00:04:03	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:30:33.394     00:04:03	-- common/autotest_common.sh@1690 -- # lcov --version
00:30:33.394     00:04:03	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:30:33.394    00:04:04	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:30:33.394    00:04:04	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:30:33.394    00:04:04	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:30:33.394    00:04:04	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:30:33.394    00:04:04	-- scripts/common.sh@335 -- # IFS=.-:
00:30:33.394    00:04:04	-- scripts/common.sh@335 -- # read -ra ver1
00:30:33.394    00:04:04	-- scripts/common.sh@336 -- # IFS=.-:
00:30:33.394    00:04:04	-- scripts/common.sh@336 -- # read -ra ver2
00:30:33.394    00:04:04	-- scripts/common.sh@337 -- # local 'op=<'
00:30:33.395    00:04:04	-- scripts/common.sh@339 -- # ver1_l=2
00:30:33.395    00:04:04	-- scripts/common.sh@340 -- # ver2_l=1
00:30:33.395    00:04:04	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:30:33.395    00:04:04	-- scripts/common.sh@343 -- # case "$op" in
00:30:33.395    00:04:04	-- scripts/common.sh@344 -- # : 1
00:30:33.395    00:04:04	-- scripts/common.sh@363 -- # (( v = 0 ))
00:30:33.395    00:04:04	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:33.395     00:04:04	-- scripts/common.sh@364 -- # decimal 1
00:30:33.395     00:04:04	-- scripts/common.sh@352 -- # local d=1
00:30:33.395     00:04:04	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:33.395     00:04:04	-- scripts/common.sh@354 -- # echo 1
00:30:33.395    00:04:04	-- scripts/common.sh@364 -- # ver1[v]=1
00:30:33.395     00:04:04	-- scripts/common.sh@365 -- # decimal 2
00:30:33.395     00:04:04	-- scripts/common.sh@352 -- # local d=2
00:30:33.395     00:04:04	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:33.395     00:04:04	-- scripts/common.sh@354 -- # echo 2
00:30:33.395    00:04:04	-- scripts/common.sh@365 -- # ver2[v]=2
00:30:33.395    00:04:04	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:30:33.395    00:04:04	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:30:33.395    00:04:04	-- scripts/common.sh@367 -- # return 0
00:30:33.395    00:04:04	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:33.395    00:04:04	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:30:33.395  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:33.395  		--rc genhtml_branch_coverage=1
00:30:33.395  		--rc genhtml_function_coverage=1
00:30:33.395  		--rc genhtml_legend=1
00:30:33.395  		--rc geninfo_all_blocks=1
00:30:33.395  		--rc geninfo_unexecuted_blocks=1
00:30:33.395  		
00:30:33.395  		'
00:30:33.395    00:04:04	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:30:33.395  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:33.395  		--rc genhtml_branch_coverage=1
00:30:33.395  		--rc genhtml_function_coverage=1
00:30:33.395  		--rc genhtml_legend=1
00:30:33.395  		--rc geninfo_all_blocks=1
00:30:33.395  		--rc geninfo_unexecuted_blocks=1
00:30:33.395  		
00:30:33.395  		'
00:30:33.395    00:04:04	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:30:33.395  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:33.395  		--rc genhtml_branch_coverage=1
00:30:33.395  		--rc genhtml_function_coverage=1
00:30:33.395  		--rc genhtml_legend=1
00:30:33.395  		--rc geninfo_all_blocks=1
00:30:33.395  		--rc geninfo_unexecuted_blocks=1
00:30:33.395  		
00:30:33.395  		'
00:30:33.395    00:04:04	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:30:33.395  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:33.395  		--rc genhtml_branch_coverage=1
00:30:33.395  		--rc genhtml_function_coverage=1
00:30:33.395  		--rc genhtml_legend=1
00:30:33.395  		--rc geninfo_all_blocks=1
00:30:33.395  		--rc geninfo_unexecuted_blocks=1
00:30:33.395  		
00:30:33.395  		'
00:30:33.395   00:04:04	-- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:30:33.395    00:04:04	-- bdev/nbd_common.sh@6 -- # set -e
00:30:33.395   00:04:04	-- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:30:33.395   00:04:04	-- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:30:33.395   00:04:04	-- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:30:33.395   00:04:04	-- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:30:33.395   00:04:04	-- bdev/blockdev.sh@18 -- # :
00:30:33.395   00:04:04	-- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0
00:30:33.395   00:04:04	-- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1
00:30:33.395   00:04:04	-- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5
00:30:33.395    00:04:04	-- bdev/blockdev.sh@672 -- # uname -s
00:30:33.395   00:04:04	-- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']'
00:30:33.395   00:04:04	-- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0
00:30:33.395   00:04:04	-- bdev/blockdev.sh@680 -- # test_type=raid5f
00:30:33.395   00:04:04	-- bdev/blockdev.sh@681 -- # crypto_device=
00:30:33.395   00:04:04	-- bdev/blockdev.sh@682 -- # dek=
00:30:33.395   00:04:04	-- bdev/blockdev.sh@683 -- # env_ctx=
00:30:33.395   00:04:04	-- bdev/blockdev.sh@684 -- # wait_for_rpc=
00:30:33.395   00:04:04	-- bdev/blockdev.sh@685 -- # '[' -n '' ']'
00:30:33.395   00:04:04	-- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]]
00:30:33.395   00:04:04	-- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]]
00:30:33.395   00:04:04	-- bdev/blockdev.sh@691 -- # start_spdk_tgt
00:30:33.395   00:04:04	-- bdev/blockdev.sh@45 -- # spdk_tgt_pid=140869
00:30:33.395   00:04:04	-- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:30:33.395   00:04:04	-- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:30:33.395   00:04:04	-- bdev/blockdev.sh@47 -- # waitforlisten 140869
00:30:33.395   00:04:04	-- common/autotest_common.sh@829 -- # '[' -z 140869 ']'
00:30:33.395   00:04:04	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:33.395   00:04:04	-- common/autotest_common.sh@834 -- # local max_retries=100
00:30:33.395   00:04:04	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:33.395  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:33.395   00:04:04	-- common/autotest_common.sh@838 -- # xtrace_disable
00:30:33.395   00:04:04	-- common/autotest_common.sh@10 -- # set +x
00:30:33.654  [2024-12-14 00:04:04.164665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:30:33.654  [2024-12-14 00:04:04.165147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140869 ]
00:30:33.654  [2024-12-14 00:04:04.332639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:33.913  [2024-12-14 00:04:04.513489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:30:33.913  [2024-12-14 00:04:04.514046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:35.310   00:04:05	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:35.310   00:04:05	-- common/autotest_common.sh@862 -- # return 0
00:30:35.310   00:04:05	-- bdev/blockdev.sh@692 -- # case "$test_type" in
00:30:35.310   00:04:05	-- bdev/blockdev.sh@724 -- # setup_raid5f_conf
00:30:35.310   00:04:05	-- bdev/blockdev.sh@278 -- # rpc_cmd
00:30:35.310   00:04:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:30:35.310   00:04:05	-- common/autotest_common.sh@10 -- # set +x
00:30:35.310  Malloc0
00:30:35.310  Malloc1
00:30:35.310  Malloc2
00:30:35.310   00:04:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:35.310   00:04:05	-- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine
00:30:35.310   00:04:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:30:35.310   00:04:05	-- common/autotest_common.sh@10 -- # set +x
00:30:35.310   00:04:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:35.310   00:04:05	-- bdev/blockdev.sh@738 -- # cat
00:30:35.310    00:04:05	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel
00:30:35.310    00:04:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:30:35.310    00:04:05	-- common/autotest_common.sh@10 -- # set +x
00:30:35.310    00:04:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:35.310    00:04:05	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev
00:30:35.310    00:04:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:30:35.310    00:04:05	-- common/autotest_common.sh@10 -- # set +x
00:30:35.310    00:04:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:35.310    00:04:05	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf
00:30:35.310    00:04:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:30:35.310    00:04:05	-- common/autotest_common.sh@10 -- # set +x
00:30:35.310    00:04:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:35.310   00:04:05	-- bdev/blockdev.sh@746 -- # mapfile -t bdevs
00:30:35.310    00:04:05	-- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs
00:30:35.310    00:04:05	-- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)'
00:30:35.310    00:04:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:30:35.310    00:04:05	-- common/autotest_common.sh@10 -- # set +x
00:30:35.310    00:04:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:35.310   00:04:05	-- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name
00:30:35.311    00:04:05	-- bdev/blockdev.sh@747 -- # printf '%s\n' '{' '  "name": "raid5f",' '  "aliases": [' '    "20f9601f-bcc7-488f-b511-14a34bd98312"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "20f9601f-bcc7-488f-b511-14a34bd98312",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "write_zeroes": true,' '    "flush": false,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "raid": {' '      "uuid": "20f9601f-bcc7-488f-b511-14a34bd98312",' '      "strip_size_kb": 2,' '      "state": "online",' '      "raid_level": "raid5f",' '      "superblock": false,' '      "num_base_bdevs": 3,' '      "num_base_bdevs_discovered": 3,' '      "num_base_bdevs_operational": 3,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc0",' '          "uuid": "0f140044-89ed-42af-84ba-32e3bdf8138a",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc1",' '          "uuid": "2398c269-5f3b-49dc-a9dd-8cd60f346fa3",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc2",' '          "uuid": "b49e2b36-e45d-4849-a954-ed0048e25194",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}'
00:30:35.311    00:04:05	-- bdev/blockdev.sh@747 -- # jq -r .name
00:30:35.311   00:04:05	-- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}")
00:30:35.311   00:04:05	-- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f
00:30:35.311   00:04:05	-- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT
00:30:35.311   00:04:05	-- bdev/blockdev.sh@752 -- # killprocess 140869
00:30:35.311   00:04:05	-- common/autotest_common.sh@936 -- # '[' -z 140869 ']'
00:30:35.311   00:04:05	-- common/autotest_common.sh@940 -- # kill -0 140869
00:30:35.311    00:04:05	-- common/autotest_common.sh@941 -- # uname
00:30:35.311   00:04:06	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:30:35.311    00:04:06	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140869
00:30:35.311   00:04:06	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:30:35.311   00:04:06	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:30:35.311   00:04:06	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 140869'
00:30:35.311  killing process with pid 140869
00:30:35.311   00:04:06	-- common/autotest_common.sh@955 -- # kill 140869
00:30:35.311   00:04:06	-- common/autotest_common.sh@960 -- # wait 140869
00:30:37.855   00:04:08	-- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT
00:30:37.855   00:04:08	-- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f ''
00:30:37.855   00:04:08	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:30:37.855   00:04:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:37.855   00:04:08	-- common/autotest_common.sh@10 -- # set +x
00:30:37.855  ************************************
00:30:37.855  START TEST bdev_hello_world
00:30:37.855  ************************************
00:30:37.855   00:04:08	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f ''
00:30:37.855  [2024-12-14 00:04:08.233303] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:30:37.855  [2024-12-14 00:04:08.233509] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140941 ]
00:30:37.855  [2024-12-14 00:04:08.402502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:38.114  [2024-12-14 00:04:08.590020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:38.372  [2024-12-14 00:04:09.071905] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:30:38.372  [2024-12-14 00:04:09.072006] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f
00:30:38.373  [2024-12-14 00:04:09.072050] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:30:38.373  [2024-12-14 00:04:09.072563] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:30:38.373  [2024-12-14 00:04:09.072719] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:30:38.373  [2024-12-14 00:04:09.072760] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:30:38.373  [2024-12-14 00:04:09.072848] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:30:38.373  
00:30:38.373  [2024-12-14 00:04:09.072900] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:30:39.750  ************************************
00:30:39.750  END TEST bdev_hello_world
00:30:39.750  ************************************
00:30:39.750  
00:30:39.750  real	0m2.114s
00:30:39.750  user	0m1.660s
00:30:39.750  sys	0m0.340s
00:30:39.750   00:04:10	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:39.750   00:04:10	-- common/autotest_common.sh@10 -- # set +x
00:30:39.750   00:04:10	-- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds ''
00:30:39.750   00:04:10	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:30:39.750   00:04:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:39.750   00:04:10	-- common/autotest_common.sh@10 -- # set +x
00:30:39.750  ************************************
00:30:39.750  START TEST bdev_bounds
00:30:39.750  ************************************
00:30:39.750   00:04:10	-- common/autotest_common.sh@1114 -- # bdev_bounds ''
00:30:39.750   00:04:10	-- bdev/blockdev.sh@288 -- # bdevio_pid=140991
00:30:39.750   00:04:10	-- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:30:39.750   00:04:10	-- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:30:39.750  Process bdevio pid: 140991
00:30:39.750   00:04:10	-- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 140991'
00:30:39.750   00:04:10	-- bdev/blockdev.sh@291 -- # waitforlisten 140991
00:30:39.750   00:04:10	-- common/autotest_common.sh@829 -- # '[' -z 140991 ']'
00:30:39.750   00:04:10	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:39.750   00:04:10	-- common/autotest_common.sh@834 -- # local max_retries=100
00:30:39.750  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:39.750   00:04:10	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:39.750   00:04:10	-- common/autotest_common.sh@838 -- # xtrace_disable
00:30:39.750   00:04:10	-- common/autotest_common.sh@10 -- # set +x
00:30:39.750  [2024-12-14 00:04:10.401098] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:30:39.750  [2024-12-14 00:04:10.401306] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140991 ]
00:30:40.008  [2024-12-14 00:04:10.579837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:40.267  [2024-12-14 00:04:10.761357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:40.267  [2024-12-14 00:04:10.761542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:30:40.267  [2024-12-14 00:04:10.761575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:40.835   00:04:11	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:40.835   00:04:11	-- common/autotest_common.sh@862 -- # return 0
00:30:40.835   00:04:11	-- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:30:40.835  I/O targets:
00:30:40.835    raid5f: 131072 blocks of 512 bytes (64 MiB)
00:30:40.835  
00:30:40.835  
00:30:40.835       CUnit - A unit testing framework for C - Version 2.1-3
00:30:40.835       http://cunit.sourceforge.net/
00:30:40.835  
00:30:40.835  
00:30:40.835  Suite: bdevio tests on: raid5f
00:30:40.835    Test: blockdev write read block ...passed
00:30:40.835    Test: blockdev write zeroes read block ...passed
00:30:40.835    Test: blockdev write zeroes read no split ...passed
00:30:40.835    Test: blockdev write zeroes read split ...passed
00:30:41.094    Test: blockdev write zeroes read split partial ...passed
00:30:41.094    Test: blockdev reset ...passed
00:30:41.094    Test: blockdev write read 8 blocks ...passed
00:30:41.094    Test: blockdev write read size > 128k ...passed
00:30:41.094    Test: blockdev write read invalid size ...passed
00:30:41.094    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:30:41.094    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:30:41.094    Test: blockdev write read max offset ...passed
00:30:41.094    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:30:41.094    Test: blockdev writev readv 8 blocks ...passed
00:30:41.094    Test: blockdev writev readv 30 x 1block ...passed
00:30:41.094    Test: blockdev writev readv block ...passed
00:30:41.094    Test: blockdev writev readv size > 128k ...passed
00:30:41.094    Test: blockdev writev readv size > 128k in two iovs ...passed
00:30:41.094    Test: blockdev comparev and writev ...passed
00:30:41.094    Test: blockdev nvme passthru rw ...passed
00:30:41.094    Test: blockdev nvme passthru vendor specific ...passed
00:30:41.094    Test: blockdev nvme admin passthru ...passed
00:30:41.094    Test: blockdev copy ...passed
00:30:41.094  
00:30:41.094  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:30:41.094                suites      1      1    n/a      0        0
00:30:41.094                 tests     23     23     23      0        0
00:30:41.094               asserts    130    130    130      0      n/a
00:30:41.094  
00:30:41.094  Elapsed time =    0.458 seconds
00:30:41.094  0
00:30:41.094   00:04:11	-- bdev/blockdev.sh@293 -- # killprocess 140991
00:30:41.094   00:04:11	-- common/autotest_common.sh@936 -- # '[' -z 140991 ']'
00:30:41.094   00:04:11	-- common/autotest_common.sh@940 -- # kill -0 140991
00:30:41.094    00:04:11	-- common/autotest_common.sh@941 -- # uname
00:30:41.094   00:04:11	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:30:41.094    00:04:11	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140991
00:30:41.094   00:04:11	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:30:41.094   00:04:11	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:30:41.094   00:04:11	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 140991'
00:30:41.094  killing process with pid 140991
00:30:41.094   00:04:11	-- common/autotest_common.sh@955 -- # kill 140991
00:30:41.094   00:04:11	-- common/autotest_common.sh@960 -- # wait 140991
00:30:42.470   00:04:12	-- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:30:42.470  
00:30:42.470  real	0m2.500s
00:30:42.470  user	0m5.799s
00:30:42.470  sys	0m0.425s
00:30:42.470   00:04:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:42.470   00:04:12	-- common/autotest_common.sh@10 -- # set +x
00:30:42.470  ************************************
00:30:42.470  END TEST bdev_bounds
00:30:42.470  ************************************
00:30:42.470   00:04:12	-- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f ''
00:30:42.470   00:04:12	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:30:42.470   00:04:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:42.470   00:04:12	-- common/autotest_common.sh@10 -- # set +x
00:30:42.470  ************************************
00:30:42.470  START TEST bdev_nbd
00:30:42.470  ************************************
00:30:42.470   00:04:12	-- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f ''
00:30:42.470    00:04:12	-- bdev/blockdev.sh@298 -- # uname -s
00:30:42.470   00:04:12	-- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:30:42.470   00:04:12	-- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:30:42.470   00:04:12	-- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:30:42.470   00:04:12	-- bdev/blockdev.sh@302 -- # bdev_all=('raid5f')
00:30:42.470   00:04:12	-- bdev/blockdev.sh@302 -- # local bdev_all
00:30:42.470   00:04:12	-- bdev/blockdev.sh@303 -- # local bdev_num=1
00:30:42.470   00:04:12	-- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:30:42.470   00:04:12	-- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:30:42.470   00:04:12	-- bdev/blockdev.sh@309 -- # local nbd_all
00:30:42.470   00:04:12	-- bdev/blockdev.sh@310 -- # bdev_num=1
00:30:42.470   00:04:12	-- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0')
00:30:42.470   00:04:12	-- bdev/blockdev.sh@312 -- # local nbd_list
00:30:42.470   00:04:12	-- bdev/blockdev.sh@313 -- # bdev_list=('raid5f')
00:30:42.470   00:04:12	-- bdev/blockdev.sh@313 -- # local bdev_list
00:30:42.470   00:04:12	-- bdev/blockdev.sh@316 -- # nbd_pid=141063
00:30:42.470   00:04:12	-- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:30:42.470   00:04:12	-- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:30:42.470   00:04:12	-- bdev/blockdev.sh@318 -- # waitforlisten 141063 /var/tmp/spdk-nbd.sock
00:30:42.470   00:04:12	-- common/autotest_common.sh@829 -- # '[' -z 141063 ']'
00:30:42.470   00:04:12	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:30:42.470   00:04:12	-- common/autotest_common.sh@834 -- # local max_retries=100
00:30:42.470  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:30:42.470   00:04:12	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:30:42.470   00:04:12	-- common/autotest_common.sh@838 -- # xtrace_disable
00:30:42.470   00:04:12	-- common/autotest_common.sh@10 -- # set +x
00:30:42.470  [2024-12-14 00:04:12.960350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:30:42.470  [2024-12-14 00:04:12.960555] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:42.470  [2024-12-14 00:04:13.127557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:42.729  [2024-12-14 00:04:13.305921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:44.104   00:04:14	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:44.104   00:04:14	-- common/autotest_common.sh@862 -- # return 0
00:30:44.104   00:04:14	-- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f
00:30:44.104   00:04:14	-- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:30:44.104   00:04:14	-- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f')
00:30:44.104   00:04:14	-- bdev/nbd_common.sh@114 -- # local bdev_list
00:30:44.104   00:04:14	-- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f
00:30:44.104   00:04:14	-- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:30:44.104   00:04:14	-- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f')
00:30:44.104   00:04:14	-- bdev/nbd_common.sh@23 -- # local bdev_list
00:30:44.104   00:04:14	-- bdev/nbd_common.sh@24 -- # local i
00:30:44.104   00:04:14	-- bdev/nbd_common.sh@25 -- # local nbd_device
00:30:44.104   00:04:14	-- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:30:44.104   00:04:14	-- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:30:44.104    00:04:14	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f
00:30:44.104   00:04:14	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:30:44.104    00:04:14	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:30:44.104   00:04:14	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:30:44.104   00:04:14	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:30:44.104   00:04:14	-- common/autotest_common.sh@867 -- # local i
00:30:44.104   00:04:14	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:30:44.104   00:04:14	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:30:44.104   00:04:14	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:30:44.104   00:04:14	-- common/autotest_common.sh@871 -- # break
00:30:44.104   00:04:14	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:30:44.104   00:04:14	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:30:44.104   00:04:14	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:30:44.104  1+0 records in
00:30:44.104  1+0 records out
00:30:44.104  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332409 s, 12.3 MB/s
00:30:44.104    00:04:14	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:30:44.104   00:04:14	-- common/autotest_common.sh@884 -- # size=4096
00:30:44.104   00:04:14	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:30:44.104   00:04:14	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:30:44.104   00:04:14	-- common/autotest_common.sh@887 -- # return 0
00:30:44.104   00:04:14	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:30:44.104   00:04:14	-- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:30:44.104    00:04:14	-- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:30:44.362   00:04:15	-- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:30:44.362    {
00:30:44.362      "nbd_device": "/dev/nbd0",
00:30:44.362      "bdev_name": "raid5f"
00:30:44.362    }
00:30:44.362  ]'
00:30:44.362   00:04:15	-- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:30:44.362    00:04:15	-- bdev/nbd_common.sh@119 -- # echo '[
00:30:44.362    {
00:30:44.362      "nbd_device": "/dev/nbd0",
00:30:44.362      "bdev_name": "raid5f"
00:30:44.362    }
00:30:44.362  ]'
00:30:44.362    00:04:15	-- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:30:44.362   00:04:15	-- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:30:44.362   00:04:15	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:30:44.362   00:04:15	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:30:44.362   00:04:15	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:30:44.362   00:04:15	-- bdev/nbd_common.sh@51 -- # local i
00:30:44.362   00:04:15	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:30:44.362   00:04:15	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:30:44.619    00:04:15	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:30:44.619   00:04:15	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:30:44.619   00:04:15	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:30:44.619   00:04:15	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:30:44.619   00:04:15	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:30:44.619   00:04:15	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:30:44.619   00:04:15	-- bdev/nbd_common.sh@41 -- # break
00:30:44.619   00:04:15	-- bdev/nbd_common.sh@45 -- # return 0
00:30:44.619    00:04:15	-- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:30:44.619    00:04:15	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:30:44.619     00:04:15	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:30:44.876    00:04:15	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:30:44.876     00:04:15	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:30:44.876     00:04:15	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:30:45.134    00:04:15	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:30:45.134     00:04:15	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:30:45.134     00:04:15	-- bdev/nbd_common.sh@65 -- # echo ''
00:30:45.134     00:04:15	-- bdev/nbd_common.sh@65 -- # true
00:30:45.134    00:04:15	-- bdev/nbd_common.sh@65 -- # count=0
00:30:45.134    00:04:15	-- bdev/nbd_common.sh@66 -- # echo 0
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@122 -- # count=0
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@127 -- # return 0
00:30:45.134   00:04:15	-- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f')
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0')
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f')
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@12 -- # local i
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0
00:30:45.134  /dev/nbd0
00:30:45.134    00:04:15	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:30:45.134   00:04:15	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:30:45.134   00:04:15	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:30:45.134   00:04:15	-- common/autotest_common.sh@867 -- # local i
00:30:45.134   00:04:15	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:30:45.135   00:04:15	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:30:45.135   00:04:15	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:30:45.135   00:04:15	-- common/autotest_common.sh@871 -- # break
00:30:45.135   00:04:15	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:30:45.135   00:04:15	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:30:45.135   00:04:15	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:30:45.135  1+0 records in
00:30:45.135  1+0 records out
00:30:45.135  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518944 s, 7.9 MB/s
00:30:45.135    00:04:15	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:30:45.135   00:04:15	-- common/autotest_common.sh@884 -- # size=4096
00:30:45.135   00:04:15	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:30:45.135   00:04:15	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:30:45.135   00:04:15	-- common/autotest_common.sh@887 -- # return 0
00:30:45.135   00:04:15	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:30:45.135   00:04:15	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:30:45.135    00:04:15	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:30:45.135    00:04:15	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:30:45.393     00:04:15	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:30:45.393    00:04:16	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:30:45.393    {
00:30:45.393      "nbd_device": "/dev/nbd0",
00:30:45.393      "bdev_name": "raid5f"
00:30:45.393    }
00:30:45.393  ]'
00:30:45.393     00:04:16	-- bdev/nbd_common.sh@64 -- # echo '[
00:30:45.393    {
00:30:45.393      "nbd_device": "/dev/nbd0",
00:30:45.393      "bdev_name": "raid5f"
00:30:45.393    }
00:30:45.393  ]'
00:30:45.393     00:04:16	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:30:45.393    00:04:16	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:30:45.393     00:04:16	-- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:30:45.393     00:04:16	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:30:45.393    00:04:16	-- bdev/nbd_common.sh@65 -- # count=1
00:30:45.393    00:04:16	-- bdev/nbd_common.sh@66 -- # echo 1
00:30:45.393   00:04:16	-- bdev/nbd_common.sh@95 -- # count=1
00:30:45.393   00:04:16	-- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']'
00:30:45.393   00:04:16	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write
00:30:45.393   00:04:16	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:30:45.393   00:04:16	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:30:45.393   00:04:16	-- bdev/nbd_common.sh@71 -- # local operation=write
00:30:45.393   00:04:16	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:30:45.393   00:04:16	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:30:45.393   00:04:16	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:30:45.393  256+0 records in
00:30:45.393  256+0 records out
00:30:45.393  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00814232 s, 129 MB/s
00:30:45.393   00:04:16	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:30:45.393   00:04:16	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:30:45.652  256+0 records in
00:30:45.652  256+0 records out
00:30:45.652  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278954 s, 37.6 MB/s
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@51 -- # local i
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:30:45.652   00:04:16	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:30:45.910    00:04:16	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:30:45.910   00:04:16	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:30:45.910   00:04:16	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:30:45.910   00:04:16	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:30:45.910   00:04:16	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:30:45.910   00:04:16	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:30:45.910   00:04:16	-- bdev/nbd_common.sh@41 -- # break
00:30:45.910   00:04:16	-- bdev/nbd_common.sh@45 -- # return 0
00:30:45.910    00:04:16	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:30:45.910    00:04:16	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:30:45.910     00:04:16	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:30:46.168    00:04:16	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:30:46.168     00:04:16	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:30:46.168     00:04:16	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:30:46.168    00:04:16	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:30:46.168     00:04:16	-- bdev/nbd_common.sh@65 -- # echo ''
00:30:46.168     00:04:16	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:30:46.168     00:04:16	-- bdev/nbd_common.sh@65 -- # true
00:30:46.168    00:04:16	-- bdev/nbd_common.sh@65 -- # count=0
00:30:46.168    00:04:16	-- bdev/nbd_common.sh@66 -- # echo 0
00:30:46.168   00:04:16	-- bdev/nbd_common.sh@104 -- # count=0
00:30:46.168   00:04:16	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:30:46.168   00:04:16	-- bdev/nbd_common.sh@109 -- # return 0
00:30:46.168   00:04:16	-- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:30:46.168   00:04:16	-- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:30:46.168   00:04:16	-- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0')
00:30:46.168   00:04:16	-- bdev/nbd_common.sh@132 -- # local nbd_list
00:30:46.168   00:04:16	-- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:30:46.168   00:04:16	-- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:30:46.426  malloc_lvol_verify
00:30:46.426   00:04:16	-- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:30:46.426  b179296f-91c3-4637-bf75-50f809b2f665
00:30:46.426   00:04:17	-- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:30:46.685  a5b5a410-2e6a-49e7-9a14-f26451f116b7
00:30:46.685   00:04:17	-- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:30:46.943  /dev/nbd0
00:30:46.943   00:04:17	-- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:30:46.943  mke2fs 1.46.5 (30-Dec-2021)
00:30:46.943  
00:30:46.943  Filesystem too small for a journal
00:30:46.943  Discarding device blocks:    0/1024         done                            
00:30:46.943  Creating filesystem with 1024 4k blocks and 1024 inodes
00:30:46.943  
00:30:46.943  Allocating group tables: 0/1   done                            
00:30:46.943  Writing inode tables: 0/1   done                            
00:30:46.943  Writing superblocks and filesystem accounting information: 0/1   done
00:30:46.943  
00:30:46.943   00:04:17	-- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:30:46.943   00:04:17	-- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:30:46.943   00:04:17	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:30:46.943   00:04:17	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:30:46.943   00:04:17	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:30:46.943   00:04:17	-- bdev/nbd_common.sh@51 -- # local i
00:30:46.943   00:04:17	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:30:46.943   00:04:17	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:30:47.201    00:04:17	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:30:47.201   00:04:17	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:30:47.201   00:04:17	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:30:47.201   00:04:17	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:30:47.201   00:04:17	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:30:47.201   00:04:17	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:30:47.201   00:04:17	-- bdev/nbd_common.sh@41 -- # break
00:30:47.201   00:04:17	-- bdev/nbd_common.sh@45 -- # return 0
00:30:47.201   00:04:17	-- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:30:47.201   00:04:17	-- bdev/nbd_common.sh@147 -- # return 0
00:30:47.201   00:04:17	-- bdev/blockdev.sh@324 -- # killprocess 141063
00:30:47.201   00:04:17	-- common/autotest_common.sh@936 -- # '[' -z 141063 ']'
00:30:47.201   00:04:17	-- common/autotest_common.sh@940 -- # kill -0 141063
00:30:47.201    00:04:17	-- common/autotest_common.sh@941 -- # uname
00:30:47.201   00:04:17	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:30:47.202    00:04:17	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141063
00:30:47.202   00:04:17	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:30:47.202   00:04:17	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:30:47.202   00:04:17	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 141063'
00:30:47.202  killing process with pid 141063
00:30:47.202   00:04:17	-- common/autotest_common.sh@955 -- # kill 141063
00:30:47.202   00:04:17	-- common/autotest_common.sh@960 -- # wait 141063
00:30:48.577   00:04:19	-- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:30:48.577  
00:30:48.577  real	0m6.166s
00:30:48.577  user	0m8.518s
00:30:48.577  sys	0m1.205s
00:30:48.577   00:04:19	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:48.577  ************************************
00:30:48.577   00:04:19	-- common/autotest_common.sh@10 -- # set +x
00:30:48.577  END TEST bdev_nbd
00:30:48.577  ************************************
00:30:48.577   00:04:19	-- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:30:48.577   00:04:19	-- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']'
00:30:48.577   00:04:19	-- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']'
00:30:48.577   00:04:19	-- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite ''
00:30:48.577   00:04:19	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:30:48.577   00:04:19	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:48.577   00:04:19	-- common/autotest_common.sh@10 -- # set +x
00:30:48.577  ************************************
00:30:48.577  START TEST bdev_fio
00:30:48.577  ************************************
00:30:48.577   00:04:19	-- common/autotest_common.sh@1114 -- # fio_test_suite ''
00:30:48.577   00:04:19	-- bdev/blockdev.sh@329 -- # local env_context
00:30:48.577   00:04:19	-- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:30:48.577  /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:30:48.577   00:04:19	-- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:30:48.577    00:04:19	-- bdev/blockdev.sh@337 -- # echo ''
00:30:48.577    00:04:19	-- bdev/blockdev.sh@337 -- # sed s/--env-context=//
00:30:48.577   00:04:19	-- bdev/blockdev.sh@337 -- # env_context=
00:30:48.577   00:04:19	-- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:30:48.577   00:04:19	-- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:30:48.577   00:04:19	-- common/autotest_common.sh@1270 -- # local workload=verify
00:30:48.577   00:04:19	-- common/autotest_common.sh@1271 -- # local bdev_type=AIO
00:30:48.577   00:04:19	-- common/autotest_common.sh@1272 -- # local env_context=
00:30:48.577   00:04:19	-- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio
00:30:48.577   00:04:19	-- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:30:48.577   00:04:19	-- common/autotest_common.sh@1280 -- # '[' -z verify ']'
00:30:48.578   00:04:19	-- common/autotest_common.sh@1284 -- # '[' -n '' ']'
00:30:48.578   00:04:19	-- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:30:48.578   00:04:19	-- common/autotest_common.sh@1290 -- # cat
00:30:48.578   00:04:19	-- common/autotest_common.sh@1302 -- # '[' verify == verify ']'
00:30:48.578   00:04:19	-- common/autotest_common.sh@1303 -- # cat
00:30:48.578   00:04:19	-- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']'
00:30:48.578    00:04:19	-- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version
00:30:48.578   00:04:19	-- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:30:48.578   00:04:19	-- common/autotest_common.sh@1314 -- # echo serialize_overlap=1
00:30:48.578   00:04:19	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:30:48.578   00:04:19	-- bdev/blockdev.sh@340 -- # echo '[job_raid5f]'
00:30:48.578   00:04:19	-- bdev/blockdev.sh@341 -- # echo filename=raid5f
00:30:48.578   00:04:19	-- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 			--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:30:48.578   00:04:19	-- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:30:48.578   00:04:19	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:30:48.578   00:04:19	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:48.578   00:04:19	-- common/autotest_common.sh@10 -- # set +x
00:30:48.578  ************************************
00:30:48.578  START TEST bdev_fio_rw_verify
00:30:48.578  ************************************
00:30:48.578   00:04:19	-- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:30:48.578   00:04:19	-- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:30:48.578   00:04:19	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:30:48.578   00:04:19	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:30:48.578   00:04:19	-- common/autotest_common.sh@1328 -- # local sanitizers
00:30:48.578   00:04:19	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:30:48.578   00:04:19	-- common/autotest_common.sh@1330 -- # shift
00:30:48.578   00:04:19	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:30:48.578   00:04:19	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:30:48.578    00:04:19	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:30:48.578    00:04:19	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:30:48.578    00:04:19	-- common/autotest_common.sh@1334 -- # grep libasan
00:30:48.578   00:04:19	-- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6
00:30:48.578   00:04:19	-- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]]
00:30:48.578   00:04:19	-- common/autotest_common.sh@1336 -- # break
00:30:48.578   00:04:19	-- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:30:48.578   00:04:19	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:30:48.837  job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:30:48.837  fio-3.35
00:30:48.837  Starting 1 thread
00:31:01.044  
00:31:01.044  job_raid5f: (groupid=0, jobs=1): err= 0: pid=141307: Sat Dec 14 00:04:30 2024
00:31:01.044    read: IOPS=11.8k, BW=46.2MiB/s (48.4MB/s)(462MiB/10001msec)
00:31:01.044      slat (usec): min=18, max=195, avg=20.47, stdev= 3.55
00:31:01.044      clat (usec): min=12, max=624, avg=135.94, stdev=50.91
00:31:01.044       lat (usec): min=33, max=741, avg=156.41, stdev=52.01
00:31:01.044      clat percentiles (usec):
00:31:01.044       | 50.000th=[  141], 99.000th=[  260], 99.900th=[  334], 99.990th=[  586],
00:31:01.044       | 99.999th=[  619]
00:31:01.044    write: IOPS=12.4k, BW=48.5MiB/s (50.8MB/s)(478MiB/9870msec); 0 zone resets
00:31:01.044      slat (usec): min=8, max=206, avg=17.31, stdev= 3.79
00:31:01.044      clat (usec): min=60, max=1020, avg=307.22, stdev=47.87
00:31:01.044       lat (usec): min=76, max=1200, avg=324.53, stdev=49.51
00:31:01.044      clat percentiles (usec):
00:31:01.044       | 50.000th=[  310], 99.000th=[  498], 99.900th=[  553], 99.990th=[  725],
00:31:01.044       | 99.999th=[  963]
00:31:01.044     bw (  KiB/s): min=38544, max=51624, per=98.58%, avg=48912.84, stdev=3008.01, samples=19
00:31:01.044     iops        : min= 9636, max=12906, avg=12228.21, stdev=752.00, samples=19
00:31:01.044    lat (usec)   : 20=0.01%, 50=0.01%, 100=12.66%, 250=40.77%, 500=46.05%
00:31:01.044    lat (usec)   : 750=0.51%, 1000=0.01%
00:31:01.044    lat (msec)   : 2=0.01%
00:31:01.044    cpu          : usr=99.68%, sys=0.29%, ctx=91, majf=0, minf=8407
00:31:01.044    IO depths    : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:01.044       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:01.044       complete  : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:01.044       issued rwts: total=118283,122429,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:01.044       latency   : target=0, window=0, percentile=100.00%, depth=8
00:31:01.044  
00:31:01.044  Run status group 0 (all jobs):
00:31:01.044     READ: bw=46.2MiB/s (48.4MB/s), 46.2MiB/s-46.2MiB/s (48.4MB/s-48.4MB/s), io=462MiB (484MB), run=10001-10001msec
00:31:01.044    WRITE: bw=48.5MiB/s (50.8MB/s), 48.5MiB/s-48.5MiB/s (50.8MB/s-50.8MB/s), io=478MiB (501MB), run=9870-9870msec
00:31:01.044  -----------------------------------------------------
00:31:01.044  Suppressions used:
00:31:01.044    count      bytes template
00:31:01.044        1          7 /usr/src/fio/parse.c
00:31:01.044      495      47520 /usr/src/fio/iolog.c
00:31:01.044        1        904 libcrypto.so
00:31:01.044  -----------------------------------------------------
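The bandwidth lines are consistent with the issued-I/O counts, as a worked check:

    reads : 118283 x 4096 B = 484,487,168 B over 10.001 s = 48.4 MB/s (46.2 MiB/s)
    writes: 122429 x 4096 B = 501,469,184 B over  9.870 s = 50.8 MB/s (48.5 MiB/s)

matching the READ/WRITE totals in the run status group above.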
00:31:01.044  
00:31:01.044  
00:31:01.044  real	0m12.325s
00:31:01.044  user	0m12.864s
00:31:01.044  sys	0m0.564s
00:31:01.044   00:04:31	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:31:01.044   00:04:31	-- common/autotest_common.sh@10 -- # set +x
00:31:01.044  ************************************
00:31:01.044  END TEST bdev_fio_rw_verify
00:31:01.044  ************************************
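One detail worth flagging from the run above: /usr/src/fio/fio is not an ASan-instrumented binary, but the SPDK fio plugin it loads is, so the ASan runtime has to be loaded before anything else in the process. The harness therefore resolves libasan from the plugin's own dependencies and preloads both; condensed to a sketch with the paths from this log:

    asan_lib=$(ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" \
        /usr/src/fio/fio --ioengine=spdk_bdev ...   # remaining flags as in the trace above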
00:31:01.044   00:04:31	-- bdev/blockdev.sh@348 -- # rm -f
00:31:01.044   00:04:31	-- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:31:01.044   00:04:31	-- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:31:01.044   00:04:31	-- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:31:01.044   00:04:31	-- common/autotest_common.sh@1270 -- # local workload=trim
00:31:01.044   00:04:31	-- common/autotest_common.sh@1271 -- # local bdev_type=
00:31:01.044   00:04:31	-- common/autotest_common.sh@1272 -- # local env_context=
00:31:01.044   00:04:31	-- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio
00:31:01.044   00:04:31	-- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:31:01.045   00:04:31	-- common/autotest_common.sh@1280 -- # '[' -z trim ']'
00:31:01.045   00:04:31	-- common/autotest_common.sh@1284 -- # '[' -n '' ']'
00:31:01.045   00:04:31	-- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:31:01.045   00:04:31	-- common/autotest_common.sh@1290 -- # cat
00:31:01.045   00:04:31	-- common/autotest_common.sh@1302 -- # '[' trim == verify ']'
00:31:01.045   00:04:31	-- common/autotest_common.sh@1317 -- # '[' trim == trim ']'
00:31:01.045   00:04:31	-- common/autotest_common.sh@1318 -- # echo rw=trimwrite
00:31:01.045    00:04:31	-- bdev/blockdev.sh@353 -- # printf '%s\n' '{' '  "name": "raid5f",' '  "aliases": [' '    "20f9601f-bcc7-488f-b511-14a34bd98312"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "20f9601f-bcc7-488f-b511-14a34bd98312",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "write_zeroes": true,' '    "flush": false,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "raid": {' '      "uuid": "20f9601f-bcc7-488f-b511-14a34bd98312",' '      "strip_size_kb": 2,' '      "state": "online",' '      "raid_level": "raid5f",' '      "superblock": false,' '      "num_base_bdevs": 3,' '      "num_base_bdevs_discovered": 3,' '      "num_base_bdevs_operational": 3,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc0",' '          "uuid": "0f140044-89ed-42af-84ba-32e3bdf8138a",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc1",' '          "uuid": "2398c269-5f3b-49dc-a9dd-8cd60f346fa3",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc2",' '          "uuid": "b49e2b36-e45d-4849-a954-ed0048e25194",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}'
00:31:01.045    00:04:31	-- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:31:01.045   00:04:31	-- bdev/blockdev.sh@353 -- # [[ -n '' ]]
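The jq filter above is what gates trim testing: it keeps only bdevs whose supported_io_types advertise unmap. Run against the JSON printed for raid5f it produces nothing ("unmap": false in the bdev dump above), so the [[ -n '' ]] test fails and no trim job is generated. Against a live target the same check would look like this sketch (here the JSON came from a saved variable via printf instead):

    scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'
    # raid5f is absent from the output, so the trim fio pass is skipped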
00:31:01.045   00:04:31	-- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:31:01.045  /home/vagrant/spdk_repo/spdk
00:31:01.045   00:04:31	-- bdev/blockdev.sh@360 -- # popd
00:31:01.045   00:04:31	-- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT
00:31:01.045   00:04:31	-- bdev/blockdev.sh@362 -- # return 0
00:31:01.045  
00:31:01.045  real	0m12.506s
00:31:01.045  user	0m12.992s
00:31:01.045  sys	0m0.618s
00:31:01.045   00:04:31	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:31:01.045   00:04:31	-- common/autotest_common.sh@10 -- # set +x
00:31:01.045  ************************************
00:31:01.045  END TEST bdev_fio
00:31:01.045  ************************************
00:31:01.045   00:04:31	-- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:31:01.045   00:04:31	-- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:31:01.045   00:04:31	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:31:01.045   00:04:31	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:31:01.045   00:04:31	-- common/autotest_common.sh@10 -- # set +x
00:31:01.045  ************************************
00:31:01.045  START TEST bdev_verify
00:31:01.045  ************************************
00:31:01.045   00:04:31	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:31:01.045  [2024-12-14 00:04:31.738302] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:31:01.045  [2024-12-14 00:04:31.738508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141475 ]
00:31:01.303  [2024-12-14 00:04:31.909620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:31:01.561  [2024-12-14 00:04:32.091104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:01.562  [2024-12-14 00:04:32.091123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:02.129  Running I/O for 5 seconds...
00:31:07.398  
00:31:07.398                                                                                                  Latency(us)
00:31:07.398  
[2024-12-14T00:04:38.130Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:07.398  
[2024-12-14T00:04:38.130Z]  Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:07.398  	 Verification LBA range: start 0x0 length 0x2000
00:31:07.398  	 raid5f              :       5.02    8332.37      32.55       0.00     0.00   24356.52     659.08   19065.02
00:31:07.398  
[2024-12-14T00:04:38.130Z]  Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:31:07.398  	 Verification LBA range: start 0x2000 length 0x2000
00:31:07.398  	 raid5f              :       5.02    8268.23      32.30       0.00     0.00   24542.64     510.14   19422.49
00:31:07.398  
[2024-12-14T00:04:38.130Z]  ===================================================================================================================
00:31:07.398  
[2024-12-14T00:04:38.130Z]  Total                       :              16600.61      64.85       0.00     0.00   24449.22     510.14   19422.49
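The MiB/s column follows directly from IOPS at this run's 4 KiB I/O size (-o 4096), as a quick consistency check:

    mask 0x1: 8332.37 IOPS x 4096 B / 2^20 = 32.55 MiB/s
    mask 0x2: 8268.23 IOPS x 4096 B / 2^20 = 32.30 MiB/s
    total   : 16600.61 IOPS                = 64.85 MiB/s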
00:31:08.333  
00:31:08.333  real	0m7.132s
00:31:08.333  user	0m13.040s
00:31:08.333  sys	0m0.320s
00:31:08.333   00:04:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:31:08.333   00:04:38	-- common/autotest_common.sh@10 -- # set +x
00:31:08.333  ************************************
00:31:08.333  END TEST bdev_verify
00:31:08.333  ************************************
00:31:08.333   00:04:38	-- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:31:08.333   00:04:38	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:31:08.333   00:04:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:31:08.333   00:04:38	-- common/autotest_common.sh@10 -- # set +x
00:31:08.333  ************************************
00:31:08.333  START TEST bdev_verify_big_io
00:31:08.333  ************************************
00:31:08.333   00:04:38	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:31:08.333  [2024-12-14 00:04:38.921478] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:31:08.333  [2024-12-14 00:04:38.921701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141583 ]
00:31:08.592  [2024-12-14 00:04:39.094231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:31:08.592  [2024-12-14 00:04:39.282686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:08.592  [2024-12-14 00:04:39.282703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:09.159  Running I/O for 5 seconds...
00:31:14.482  
00:31:14.482                                                                                                  Latency(us)
00:31:14.482  
[2024-12-14T00:04:45.214Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:14.482  
[2024-12-14T00:04:45.214Z]  Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:14.482  	 Verification LBA range: start 0x0 length 0x200
00:31:14.482  	 raid5f              :       5.18     600.26      37.52       0.00     0.00 5566929.93     350.02  192556.68
00:31:14.482  
[2024-12-14T00:04:45.214Z]  Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:14.482  	 Verification LBA range: start 0x200 length 0x200
00:31:14.482  	 raid5f              :       5.18     591.74      36.98       0.00     0.00 5639891.98     172.22  191603.43
00:31:14.482  
[2024-12-14T00:04:45.214Z]  ===================================================================================================================
00:31:14.482  
[2024-12-14T00:04:45.214Z]  Total                       :               1192.00      74.50       0.00     0.00 5603162.79     172.22  192556.68
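Same check at this run's 64 KiB I/O size (-o 65536), where MiB/s is simply IOPS / 16:

    600.26 -> 37.52 MiB/s, 591.74 -> 36.98 MiB/s, and 1192.00 IOPS x 65536 B / 2^20 = 74.50 MiB/s total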
00:31:15.446  
00:31:15.446  real	0m7.309s
00:31:15.446  user	0m13.383s
00:31:15.446  sys	0m0.328s
00:31:15.446   00:04:46	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:31:15.446   00:04:46	-- common/autotest_common.sh@10 -- # set +x
00:31:15.446  ************************************
00:31:15.446  END TEST bdev_verify_big_io
00:31:15.446  ************************************
00:31:15.705   00:04:46	-- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:15.705   00:04:46	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:31:15.705   00:04:46	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:31:15.705   00:04:46	-- common/autotest_common.sh@10 -- # set +x
00:31:15.705  ************************************
00:31:15.705  START TEST bdev_write_zeroes
00:31:15.705  ************************************
00:31:15.705   00:04:46	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:15.705  [2024-12-14 00:04:46.285530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:31:15.705  [2024-12-14 00:04:46.285750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141691 ]
00:31:15.964  [2024-12-14 00:04:46.454963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:15.964  [2024-12-14 00:04:46.642829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:16.531  Running I/O for 1 seconds...
00:31:17.466  
00:31:17.466                                                                                                  Latency(us)
00:31:17.466  
[2024-12-14T00:04:48.198Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:17.466  
[2024-12-14T00:04:48.198Z]  Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:17.466  	 raid5f              :       1.00   28024.11     109.47       0.00     0.00    4554.95    1370.30    4974.78
00:31:17.466  
[2024-12-14T00:04:48.198Z]  ===================================================================================================================
00:31:17.466  
[2024-12-14T00:04:48.198Z]  Total                       :              28024.11     109.47       0.00     0.00    4554.95    1370.30    4974.78
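With a single job at queue depth 128 (-q 128), the reported average latency also agrees with Little's law, a sanity check bdevperf does not print itself:

    avg latency ~ depth / IOPS = 128 / 28024.11 IOPS = 4.567 ms (reported: 4554.95 us)

the ~0.3% gap being ramp-up and drain effects over the 1-second run.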
00:31:18.843  
00:31:18.843  real	0m3.117s
00:31:18.843  user	0m2.690s
00:31:18.843  sys	0m0.312s
00:31:18.843   00:04:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:31:18.843   00:04:49	-- common/autotest_common.sh@10 -- # set +x
00:31:18.843  ************************************
00:31:18.843  END TEST bdev_write_zeroes
00:31:18.843  ************************************
00:31:18.843   00:04:49	-- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:18.843   00:04:49	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:31:18.843   00:04:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:31:18.843   00:04:49	-- common/autotest_common.sh@10 -- # set +x
00:31:18.843  ************************************
00:31:18.843  START TEST bdev_json_nonenclosed
00:31:18.843  ************************************
00:31:18.843   00:04:49	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:18.843  [2024-12-14 00:04:49.460969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:31:18.843  [2024-12-14 00:04:49.461157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141755 ]
00:31:19.102  [2024-12-14 00:04:49.627878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:19.102  [2024-12-14 00:04:49.811462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:19.102  [2024-12-14 00:04:49.811701] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:31:19.102  [2024-12-14 00:04:49.811751] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:19.669  
00:31:19.669  real	0m0.747s
00:31:19.669  user	0m0.490s
00:31:19.669  sys	0m0.157s
00:31:19.669   00:04:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:31:19.669   00:04:50	-- common/autotest_common.sh@10 -- # set +x
00:31:19.669  ************************************
00:31:19.669  END TEST bdev_json_nonenclosed
00:31:19.669  ************************************
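This pass feeds bdevperf a deliberately malformed config to confirm the JSON loader rejects it cleanly (spdk_app_stop on non-zero rather than a crash). Going by the json_config.c error above, nonenclosed.json plausibly holds a top-level key without the enclosing object braces, along the lines of this illustrative guess (the fixture itself is not shown in the log):

    "subsystems": []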
00:31:19.669   00:04:50	-- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:19.669   00:04:50	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:31:19.669   00:04:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:31:19.669   00:04:50	-- common/autotest_common.sh@10 -- # set +x
00:31:19.669  ************************************
00:31:19.669  START TEST bdev_json_nonarray
00:31:19.669  ************************************
00:31:19.669   00:04:50	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:19.669  [2024-12-14 00:04:50.262831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:31:19.669  [2024-12-14 00:04:50.263046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141784 ]
00:31:19.927  [2024-12-14 00:04:50.432559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:19.927  [2024-12-14 00:04:50.618776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:19.927  [2024-12-14 00:04:50.619003] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:31:19.927  [2024-12-14 00:04:50.619041] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:20.495  
00:31:20.495  real	0m0.752s
00:31:20.495  user	0m0.508s
00:31:20.495  sys	0m0.144s
00:31:20.495   00:04:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:31:20.495   00:04:50	-- common/autotest_common.sh@10 -- # set +x
00:31:20.495  ************************************
00:31:20.495  END TEST bdev_json_nonarray
00:31:20.495  ************************************
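The companion negative test: here the config is enclosed in braces but 'subsystems' is not an array, so the loader fails with the matching error above. An illustrative guess at the fixture's shape:

    { "subsystems": {} }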
00:31:20.495   00:04:50	-- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]]
00:31:20.495   00:04:50	-- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]]
00:31:20.495   00:04:50	-- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]]
00:31:20.495   00:04:50	-- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:31:20.495   00:04:50	-- bdev/blockdev.sh@809 -- # cleanup
00:31:20.495   00:04:50	-- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:31:20.495   00:04:50	-- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:31:20.495   00:04:50	-- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]]
00:31:20.495   00:04:50	-- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]]
00:31:20.495   00:04:50	-- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]]
00:31:20.495   00:04:50	-- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]]
00:31:20.495  
00:31:20.495  real	0m47.097s
00:31:20.495  user	1m3.587s
00:31:20.495  sys	0m4.745s
00:31:20.495   00:04:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:31:20.495   00:04:51	-- common/autotest_common.sh@10 -- # set +x
00:31:20.495  ************************************
00:31:20.495  END TEST blockdev_raid5f
00:31:20.495  ************************************
00:31:20.495   00:04:51	-- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:31:20.495   00:04:51	-- spdk/autotest.sh@372 -- # timing_enter post_cleanup
00:31:20.495   00:04:51	-- common/autotest_common.sh@722 -- # xtrace_disable
00:31:20.495   00:04:51	-- common/autotest_common.sh@10 -- # set +x
00:31:20.495   00:04:51	-- spdk/autotest.sh@373 -- # autotest_cleanup
00:31:20.495   00:04:51	-- common/autotest_common.sh@1381 -- # local autotest_es=0
00:31:20.495   00:04:51	-- common/autotest_common.sh@1382 -- # xtrace_disable
00:31:20.495   00:04:51	-- common/autotest_common.sh@10 -- # set +x
00:31:22.400  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:31:22.400  Waiting for block devices as requested
00:31:22.400  0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:31:22.659  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:31:22.659  Cleaning
00:31:22.659  Removing:    /var/run/dpdk/spdk0/config
00:31:22.659  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:31:22.659  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:31:22.659  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:31:22.659  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:31:22.659  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:31:22.659  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:31:22.659  Removing:    /dev/shm/spdk_tgt_trace.pid102821
00:31:22.659  Removing:    /var/run/dpdk/spdk0
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid102571
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid102821
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid103142
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid103399
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid103582
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid103706
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid103827
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid103961
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid104087
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid104135
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid104185
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid104269
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid104394
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid104942
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid105025
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid105113
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid105141
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid105288
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid105311
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid105458
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid105493
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid105562
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid105587
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid105658
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid105695
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid105895
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid105942
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid105990
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid106076
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid106172
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid106211
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid106306
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid106341
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid106393
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid106432
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid106487
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid106522
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid106567
00:31:22.659  Removing:    /var/run/dpdk/spdk_pid106609
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid106661
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid106703
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid106748
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid106783
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid106840
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid106879
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid106931
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid106966
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107017
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107060
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107105
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107138
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107190
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107231
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107284
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107319
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107364
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107408
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107458
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107493
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107545
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107580
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107634
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107674
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107719
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107764
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107814
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107865
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107913
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid107955
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid108002
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid108042
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid108096
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid108194
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid108331
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid108533
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid108625
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid108687
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid109911
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid110132
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid110339
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid110464
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid110608
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid110685
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid110717
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid110755
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid111232
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid111319
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid111441
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid111504
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid112707
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid113603
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid114488
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid115595
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid116664
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid117742
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid119233
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid120428
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid121631
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid122299
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid122839
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid123461
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid123916
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid124451
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid125002
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid125661
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid126166
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid127547
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid128140
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid128686
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid130193
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid130851
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid131468
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid132240
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid132295
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid132350
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid132411
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid132569
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid132713
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid132950
00:31:22.918  Removing:    /var/run/dpdk/spdk_pid133263
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133279
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133341
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133361
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133393
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133434
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133456
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133488
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133516
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133548
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133578
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133610
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133630
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133662
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133690
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133720
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133745
00:31:23.177  Removing:    /var/run/dpdk/spdk_pid133777
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid133804
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid133832
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid133877
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid133908
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid133957
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134034
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134087
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134115
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134153
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134185
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134267
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134331
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134358
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134401
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134427
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134453
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134475
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134499
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134527
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134545
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134569
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134621
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134662
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134694
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134740
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134761
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134783
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134846
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134873
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134916
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134949
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134966
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid134994
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135021
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135038
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135066
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135091
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135197
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135281
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135439
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135467
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135526
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135587
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135636
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135666
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135700
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135749
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135778
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135874
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135940
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid135991
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid136270
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid136408
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid136456
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid136558
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid136649
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid136693
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid136950
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid137088
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid137196
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid137253
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid137292
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid137387
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid137822
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid137863
00:31:23.178  Removing:    /var/run/dpdk/spdk_pid138182
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid138302
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid138404
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid138467
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid138499
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid138528
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid139901
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid140034
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid140047
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid140070
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid140582
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid140696
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid140869
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid140941
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid140991
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid141293
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid141475
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid141583
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid141691
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid141755
00:31:23.437  Removing:    /var/run/dpdk/spdk_pid141784
00:31:23.437  Clean
00:31:23.437  killing process with pid 92530
00:31:23.437  killing process with pid 92531
00:31:23.437   00:04:54	-- common/autotest_common.sh@1446 -- # return 0
00:31:23.437   00:04:54	-- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:31:23.437   00:04:54	-- common/autotest_common.sh@728 -- # xtrace_disable
00:31:23.437   00:04:54	-- common/autotest_common.sh@10 -- # set +x
00:31:23.696   00:04:54	-- spdk/autotest.sh@376 -- # timing_exit autotest
00:31:23.696   00:04:54	-- common/autotest_common.sh@728 -- # xtrace_disable
00:31:23.696   00:04:54	-- common/autotest_common.sh@10 -- # set +x
00:31:23.696   00:04:54	-- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:31:23.696   00:04:54	-- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:31:23.696   00:04:54	-- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:31:23.696   00:04:54	-- spdk/autotest.sh@381 -- # [[ y == y ]]
00:31:23.696    00:04:54	-- spdk/autotest.sh@383 -- # hostname
00:31:23.696   00:04:54	-- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:31:23.955  geninfo: WARNING: invalid characters removed from testname!
00:32:02.680   00:05:31	-- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:05.968   00:05:36	-- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:09.256   00:05:39	-- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:11.790   00:05:42	-- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:15.078   00:05:45	-- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:17.642   00:05:47	-- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:20.176   00:05:50	-- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:20.176     00:05:50	-- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:32:20.440      00:05:50	-- common/autotest_common.sh@1690 -- $ lcov --version
00:32:20.440      00:05:50	-- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:32:20.440     00:05:50	-- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:32:20.440     00:05:50	-- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:32:20.440     00:05:50	-- scripts/common.sh@332 -- $ local ver1 ver1_l
00:32:20.440     00:05:50	-- scripts/common.sh@333 -- $ local ver2 ver2_l
00:32:20.440     00:05:50	-- scripts/common.sh@335 -- $ IFS=.-:
00:32:20.440     00:05:50	-- scripts/common.sh@335 -- $ read -ra ver1
00:32:20.440     00:05:50	-- scripts/common.sh@336 -- $ IFS=.-:
00:32:20.440     00:05:50	-- scripts/common.sh@336 -- $ read -ra ver2
00:32:20.440     00:05:50	-- scripts/common.sh@337 -- $ local 'op=<'
00:32:20.440     00:05:50	-- scripts/common.sh@339 -- $ ver1_l=2
00:32:20.440     00:05:50	-- scripts/common.sh@340 -- $ ver2_l=1
00:32:20.440     00:05:50	-- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:32:20.440     00:05:50	-- scripts/common.sh@343 -- $ case "$op" in
00:32:20.440     00:05:50	-- scripts/common.sh@344 -- $ : 1
00:32:20.440     00:05:50	-- scripts/common.sh@363 -- $ (( v = 0 ))
00:32:20.440     00:05:50	-- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:20.440      00:05:50	-- scripts/common.sh@364 -- $ decimal 1
00:32:20.440      00:05:50	-- scripts/common.sh@352 -- $ local d=1
00:32:20.440      00:05:50	-- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:32:20.440      00:05:50	-- scripts/common.sh@354 -- $ echo 1
00:32:20.440     00:05:50	-- scripts/common.sh@364 -- $ ver1[v]=1
00:32:20.440      00:05:50	-- scripts/common.sh@365 -- $ decimal 2
00:32:20.440      00:05:50	-- scripts/common.sh@352 -- $ local d=2
00:32:20.440      00:05:50	-- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:32:20.440      00:05:51	-- scripts/common.sh@354 -- $ echo 2
00:32:20.440     00:05:51	-- scripts/common.sh@365 -- $ ver2[v]=2
00:32:20.440     00:05:51	-- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:32:20.440     00:05:51	-- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:32:20.440     00:05:51	-- scripts/common.sh@367 -- $ return 0
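The block above is scripts/common.sh choosing lcov option syntax: it splits both version strings on '.', '-' and ':' and compares them component-wise, so 'lt 1.15 2' succeeds and the pre-2.0 '--rc lcov_*_coverage=1' spelling is exported below. A self-contained sketch of that comparison, reconstructed from the xtrace rather than copied from the source (the real helper supports more operators and edge cases):

    #!/usr/bin/env bash
    # Component-wise version comparison, as traced above.
    decimal() {
        # Numeric components pass through; anything else degrades to 0 here
        # (an assumption -- the real helper handles more cases).
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }
    cmp_versions() {
        local ver1 ver2 op=$2 v        # only op='<' is exercised in this log
        IFS=.-: read -ra ver1 <<< "$1" # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3" # "2"    -> (2)
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            ver1[v]=$(decimal "${ver1[v]:-0}")   # missing components count as 0
            ver2[v]=$(decimal "${ver2[v]:-0}")
            (( ver1[v] > ver2[v] )) && return 1  # first differing component decides
            (( ver1[v] < ver2[v] )) && return 0  # '<' holds, as for 1.15 vs 2
        done
        return 1                                 # equal, so strict '<' is false
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "lcov < 2: use the --rc lcov_branch_coverage=1 spelling"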
00:32:20.440     00:05:51	-- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:20.440     00:05:51	-- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS=
00:32:20.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:20.440  		--rc genhtml_branch_coverage=1
00:32:20.440  		--rc genhtml_function_coverage=1
00:32:20.440  		--rc genhtml_legend=1
00:32:20.440  		--rc geninfo_all_blocks=1
00:32:20.440  		--rc geninfo_unexecuted_blocks=1
00:32:20.440  		
00:32:20.440  		'
00:32:20.440     00:05:51	-- common/autotest_common.sh@1703 -- $ LCOV_OPTS='
00:32:20.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:20.440  		--rc genhtml_branch_coverage=1
00:32:20.440  		--rc genhtml_function_coverage=1
00:32:20.440  		--rc genhtml_legend=1
00:32:20.440  		--rc geninfo_all_blocks=1
00:32:20.440  		--rc geninfo_unexecuted_blocks=1
00:32:20.440  		
00:32:20.440  		'
00:32:20.440     00:05:51	-- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 
00:32:20.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:20.440  		--rc genhtml_branch_coverage=1
00:32:20.440  		--rc genhtml_function_coverage=1
00:32:20.440  		--rc genhtml_legend=1
00:32:20.440  		--rc geninfo_all_blocks=1
00:32:20.440  		--rc geninfo_unexecuted_blocks=1
00:32:20.440  		
00:32:20.440  		'
00:32:20.440     00:05:51	-- common/autotest_common.sh@1704 -- $ LCOV='lcov 
00:32:20.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:20.440  		--rc genhtml_branch_coverage=1
00:32:20.440  		--rc genhtml_function_coverage=1
00:32:20.440  		--rc genhtml_legend=1
00:32:20.440  		--rc geninfo_all_blocks=1
00:32:20.440  		--rc geninfo_unexecuted_blocks=1
00:32:20.440  		
00:32:20.440  		'
00:32:20.440    00:05:51	-- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:32:20.440     00:05:51	-- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:32:20.440     00:05:51	-- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:20.440     00:05:51	-- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:20.440      00:05:51	-- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:32:20.440      00:05:51	-- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:32:20.440      00:05:51	-- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:32:20.440      00:05:51	-- paths/export.sh@5 -- $ export PATH
00:32:20.440      00:05:51	-- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:32:20.440    00:05:51	-- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:32:20.440      00:05:51	-- common/autobuild_common.sh@440 -- $ date +%s
00:32:20.440     00:05:51	-- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734134751.XXXXXX
00:32:20.440    00:05:51	-- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734134751.ww7uVy
00:32:20.440    00:05:51	-- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:32:20.440    00:05:51	-- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:32:20.440    00:05:51	-- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:32:20.440    00:05:51	-- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:32:20.440    00:05:51	-- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:32:20.440     00:05:51	-- common/autobuild_common.sh@456 -- $ get_config_params
00:32:20.440     00:05:51	-- common/autotest_common.sh@397 -- $ xtrace_disable
00:32:20.440     00:05:51	-- common/autotest_common.sh@10 -- $ set +x
00:32:20.440    00:05:51	-- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f'
00:32:20.440   00:05:51	-- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:32:20.440   00:05:51	-- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:32:20.440   00:05:51	-- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:32:20.440   00:05:51	-- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:32:20.440   00:05:51	-- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:32:20.440   00:05:51	-- spdk/autopackage.sh@23 -- $ timing_enter build_release
00:32:20.440   00:05:51	-- common/autotest_common.sh@722 -- $ xtrace_disable
00:32:20.440   00:05:51	-- common/autotest_common.sh@10 -- $ set +x
00:32:20.440   00:05:51	-- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]]
00:32:20.440   00:05:51	-- spdk/autopackage.sh@36 -- $ [[ -n '' ]]
00:32:20.440    00:05:51	-- spdk/autopackage.sh@40 -- $ get_config_params
00:32:20.440    00:05:51	-- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g
00:32:20.440    00:05:51	-- common/autotest_common.sh@397 -- $ xtrace_disable
00:32:20.440    00:05:51	-- common/autotest_common.sh@10 -- $ set +x
00:32:20.440   00:05:51	-- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f'
00:32:20.440   00:05:51	-- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --enable-lto
00:32:20.440  Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:32:20.440  Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:32:20.700  Using 'verbs' RDMA provider
00:32:33.479  Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:32:45.688  Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:32:45.688  Creating mk/config.mk...done.
00:32:45.688  Creating mk/cc.flags.mk...done.
00:32:45.688  Type 'make' to build.
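(editor's note: autopackage.sh@40-41 above re-configure the tree for a release build by stripping --enable-debug from the saved flags and appending --enable-lto. A sketch of that step, assuming get_config_params echoes the flag string shown at autobuild_common.sh@456:)

    # release re-configure as traced above, followed by the parallel build
    config_params=$(get_config_params | sed 's/--enable-debug//g')
    ./configure $config_params --enable-lto
    make -j10    # MAKEFLAGS=-j10 per autopackage.sh@10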
00:32:45.688   00:06:14	-- spdk/autopackage.sh@43 -- $ make -j10
00:32:45.688  make[1]: Nothing to be done for 'all'.
00:32:48.972  The Meson build system
00:32:48.972  Version: 1.4.0
00:32:48.972  Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:32:48.972  Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:32:48.972  Build type: native build
00:32:48.972  Program cat found: YES (/usr/bin/cat)
00:32:48.972  Project name: DPDK
00:32:48.972  Project version: 23.11.0
00:32:48.972  C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0")
00:32:48.972  C linker for the host machine: cc ld.bfd 2.38
00:32:48.972  Host machine cpu family: x86_64
00:32:48.972  Host machine cpu: x86_64
00:32:48.972  Message: ## Building in Developer Mode ##
00:32:48.972  Program pkg-config found: YES (/usr/bin/pkg-config)
00:32:48.972  Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:32:48.972  Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:32:48.972  Program python3 found: YES (/usr/bin/python3)
00:32:48.972  Program cat found: YES (/usr/bin/cat)
00:32:48.972  Compiler for C supports arguments -march=native: YES 
00:32:48.972  Checking for size of "void *" : 8 
00:32:48.972  Checking for size of "void *" : 8 (cached)
00:32:48.972  Library m found: YES
00:32:48.972  Library numa found: YES
00:32:48.972  Has header "numaif.h" : YES 
00:32:48.972  Library fdt found: NO
00:32:48.972  Library execinfo found: NO
00:32:48.972  Has header "execinfo.h" : YES 
00:32:48.972  Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2
00:32:48.972  Run-time dependency libarchive found: NO (tried pkgconfig)
00:32:48.972  Run-time dependency libbsd found: NO (tried pkgconfig)
00:32:48.972  Run-time dependency jansson found: NO (tried pkgconfig)
00:32:48.972  Run-time dependency openssl found: YES 3.0.2
00:32:48.972  Run-time dependency libpcap found: NO (tried pkgconfig)
00:32:48.972  Library pcap found: NO
00:32:48.972  Compiler for C supports arguments -Wcast-qual: YES 
00:32:48.972  Compiler for C supports arguments -Wdeprecated: YES 
00:32:48.972  Compiler for C supports arguments -Wformat: YES 
00:32:48.972  Compiler for C supports arguments -Wformat-nonliteral: YES 
00:32:48.972  Compiler for C supports arguments -Wformat-security: YES 
00:32:48.972  Compiler for C supports arguments -Wmissing-declarations: YES 
00:32:48.972  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:32:48.972  Compiler for C supports arguments -Wnested-externs: YES 
00:32:48.972  Compiler for C supports arguments -Wold-style-definition: YES 
00:32:48.972  Compiler for C supports arguments -Wpointer-arith: YES 
00:32:48.972  Compiler for C supports arguments -Wsign-compare: YES 
00:32:48.972  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:32:48.972  Compiler for C supports arguments -Wundef: YES 
00:32:48.972  Compiler for C supports arguments -Wwrite-strings: YES 
00:32:48.973  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:32:48.973  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:32:48.973  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:32:48.973  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:32:48.973  Program objdump found: YES (/usr/bin/objdump)
00:32:48.973  Compiler for C supports arguments -mavx512f: YES 
00:32:48.973  Checking if "AVX512 checking" compiles: YES 
00:32:48.973  Fetching value of define "__SSE4_2__" : 1 
00:32:48.973  Fetching value of define "__AES__" : 1 
00:32:48.973  Fetching value of define "__AVX__" : 1 
00:32:48.973  Fetching value of define "__AVX2__" : 1 
00:32:48.973  Fetching value of define "__AVX512BW__" : (undefined) 
00:32:48.973  Fetching value of define "__AVX512CD__" : (undefined) 
00:32:48.973  Fetching value of define "__AVX512DQ__" : (undefined) 
00:32:48.973  Fetching value of define "__AVX512F__" : (undefined) 
00:32:48.973  Fetching value of define "__AVX512VL__" : (undefined) 
00:32:48.973  Fetching value of define "__PCLMUL__" : 1 
00:32:48.973  Fetching value of define "__RDRND__" : 1 
00:32:48.973  Fetching value of define "__RDSEED__" : 1 
00:32:48.973  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:32:48.973  Fetching value of define "__znver1__" : (undefined) 
00:32:48.973  Fetching value of define "__znver2__" : (undefined) 
00:32:48.973  Fetching value of define "__znver3__" : (undefined) 
00:32:48.973  Fetching value of define "__znver4__" : (undefined) 
00:32:48.973  Compiler for C supports arguments -ffat-lto-objects: YES 
00:32:48.973  Library asan found: YES
00:32:48.973  Compiler for C supports arguments -Wno-format-truncation: YES 
00:32:48.973  Message: lib/log: Defining dependency "log"
00:32:48.973  Message: lib/kvargs: Defining dependency "kvargs"
00:32:48.973  Message: lib/telemetry: Defining dependency "telemetry"
00:32:48.973  Library rt found: YES
00:32:48.973  Checking for function "getentropy" : NO 
00:32:48.973  Message: lib/eal: Defining dependency "eal"
00:32:48.973  Message: lib/ring: Defining dependency "ring"
00:32:48.973  Message: lib/rcu: Defining dependency "rcu"
00:32:48.973  Message: lib/mempool: Defining dependency "mempool"
00:32:48.973  Message: lib/mbuf: Defining dependency "mbuf"
00:32:48.973  Fetching value of define "__PCLMUL__" : 1 (cached)
00:32:48.973  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:32:48.973  Compiler for C supports arguments -mpclmul: YES 
00:32:48.973  Compiler for C supports arguments -maes: YES 
00:32:48.973  Compiler for C supports arguments -mavx512f: YES (cached)
00:32:48.973  Compiler for C supports arguments -mavx512bw: YES 
00:32:48.973  Compiler for C supports arguments -mavx512dq: YES 
00:32:48.973  Compiler for C supports arguments -mavx512vl: YES 
00:32:48.973  Compiler for C supports arguments -mvpclmulqdq: YES 
00:32:48.973  Compiler for C supports arguments -mavx2: YES 
00:32:48.973  Compiler for C supports arguments -mavx: YES 
00:32:48.973  Message: lib/net: Defining dependency "net"
00:32:48.973  Message: lib/meter: Defining dependency "meter"
00:32:48.973  Message: lib/ethdev: Defining dependency "ethdev"
00:32:48.973  Message: lib/pci: Defining dependency "pci"
00:32:48.973  Message: lib/cmdline: Defining dependency "cmdline"
00:32:48.973  Message: lib/hash: Defining dependency "hash"
00:32:48.973  Message: lib/timer: Defining dependency "timer"
00:32:48.973  Message: lib/compressdev: Defining dependency "compressdev"
00:32:48.973  Message: lib/cryptodev: Defining dependency "cryptodev"
00:32:48.973  Message: lib/dmadev: Defining dependency "dmadev"
00:32:48.973  Compiler for C supports arguments -Wno-cast-qual: YES 
00:32:48.973  Message: lib/power: Defining dependency "power"
00:32:48.973  Message: lib/reorder: Defining dependency "reorder"
00:32:48.973  Message: lib/security: Defining dependency "security"
00:32:48.973  Has header "linux/userfaultfd.h" : YES 
00:32:48.973  Has header "linux/vduse.h" : YES 
00:32:48.973  Message: lib/vhost: Defining dependency "vhost"
00:32:48.973  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:32:48.973  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:32:48.973  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:32:48.973  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:32:48.973  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:32:48.973  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:32:48.973  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:32:48.973  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:32:48.973  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:32:48.973  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:32:48.973  Program doxygen found: YES (/usr/bin/doxygen)
00:32:48.973  Configuring doxy-api-html.conf using configuration
00:32:48.973  Configuring doxy-api-man.conf using configuration
00:32:48.973  Program mandb found: YES (/usr/bin/mandb)
00:32:48.973  Program sphinx-build found: NO
00:32:48.973  Configuring rte_build_config.h using configuration
00:32:48.973  Message: 
00:32:48.973  =================
00:32:48.973  Applications Enabled
00:32:48.973  =================
00:32:48.973  
00:32:48.973  apps:
00:32:48.973  	
00:32:48.973  
00:32:48.973  Message: 
00:32:48.973  =================
00:32:48.973  Libraries Enabled
00:32:48.973  =================
00:32:48.973  
00:32:48.973  libs:
00:32:48.973  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:32:48.973  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:32:48.973  	cryptodev, dmadev, power, reorder, security, vhost, 
00:32:48.973  
00:32:48.973  Message: 
00:32:48.973  ===============
00:32:48.973  Drivers Enabled
00:32:48.973  ===============
00:32:48.973  
00:32:48.973  common:
00:32:48.973  	
00:32:48.973  bus:
00:32:48.973  	pci, vdev, 
00:32:48.973  mempool:
00:32:48.973  	ring, 
00:32:48.973  dma:
00:32:48.973  	
00:32:48.973  net:
00:32:48.973  	
00:32:48.973  crypto:
00:32:48.973  	
00:32:48.973  compress:
00:32:48.973  	
00:32:48.973  vdpa:
00:32:48.973  	
00:32:48.973  
00:32:48.973  Message: 
00:32:48.973  =================
00:32:48.973  Content Skipped
00:32:48.973  =================
00:32:48.973  
00:32:48.973  apps:
00:32:48.973  	dumpcap:	explicitly disabled via build config
00:32:48.973  	graph:	explicitly disabled via build config
00:32:48.973  	pdump:	explicitly disabled via build config
00:32:48.973  	proc-info:	explicitly disabled via build config
00:32:48.973  	test-acl:	explicitly disabled via build config
00:32:48.973  	test-bbdev:	explicitly disabled via build config
00:32:48.973  	test-cmdline:	explicitly disabled via build config
00:32:48.973  	test-compress-perf:	explicitly disabled via build config
00:32:48.973  	test-crypto-perf:	explicitly disabled via build config
00:32:48.973  	test-dma-perf:	explicitly disabled via build config
00:32:48.973  	test-eventdev:	explicitly disabled via build config
00:32:48.973  	test-fib:	explicitly disabled via build config
00:32:48.973  	test-flow-perf:	explicitly disabled via build config
00:32:48.973  	test-gpudev:	explicitly disabled via build config
00:32:48.973  	test-mldev:	explicitly disabled via build config
00:32:48.973  	test-pipeline:	explicitly disabled via build config
00:32:48.973  	test-pmd:	explicitly disabled via build config
00:32:48.973  	test-regex:	explicitly disabled via build config
00:32:48.973  	test-sad:	explicitly disabled via build config
00:32:48.973  	test-security-perf:	explicitly disabled via build config
00:32:48.973  	
00:32:48.973  libs:
00:32:48.973  	metrics:	explicitly disabled via build config
00:32:48.973  	acl:	explicitly disabled via build config
00:32:48.973  	bbdev:	explicitly disabled via build config
00:32:48.973  	bitratestats:	explicitly disabled via build config
00:32:48.973  	bpf:	explicitly disabled via build config
00:32:48.973  	cfgfile:	explicitly disabled via build config
00:32:48.973  	distributor:	explicitly disabled via build config
00:32:48.973  	efd:	explicitly disabled via build config
00:32:48.973  	eventdev:	explicitly disabled via build config
00:32:48.973  	dispatcher:	explicitly disabled via build config
00:32:48.973  	gpudev:	explicitly disabled via build config
00:32:48.973  	gro:	explicitly disabled via build config
00:32:48.973  	gso:	explicitly disabled via build config
00:32:48.973  	ip_frag:	explicitly disabled via build config
00:32:48.973  	jobstats:	explicitly disabled via build config
00:32:48.973  	latencystats:	explicitly disabled via build config
00:32:48.973  	lpm:	explicitly disabled via build config
00:32:48.973  	member:	explicitly disabled via build config
00:32:48.973  	pcapng:	explicitly disabled via build config
00:32:48.973  	rawdev:	explicitly disabled via build config
00:32:48.973  	regexdev:	explicitly disabled via build config
00:32:48.973  	mldev:	explicitly disabled via build config
00:32:48.973  	rib:	explicitly disabled via build config
00:32:48.973  	sched:	explicitly disabled via build config
00:32:48.973  	stack:	explicitly disabled via build config
00:32:48.973  	ipsec:	explicitly disabled via build config
00:32:48.973  	pdcp:	explicitly disabled via build config
00:32:48.973  	fib:	explicitly disabled via build config
00:32:48.973  	port:	explicitly disabled via build config
00:32:48.973  	pdump:	explicitly disabled via build config
00:32:48.973  	table:	explicitly disabled via build config
00:32:48.973  	pipeline:	explicitly disabled via build config
00:32:48.973  	graph:	explicitly disabled via build config
00:32:48.973  	node:	explicitly disabled via build config
00:32:48.973  	
00:32:48.973  drivers:
00:32:48.973  	common/cpt:	not in enabled drivers build config
00:32:48.973  	common/dpaax:	not in enabled drivers build config
00:32:48.973  	common/iavf:	not in enabled drivers build config
00:32:48.973  	common/idpf:	not in enabled drivers build config
00:32:48.973  	common/mvep:	not in enabled drivers build config
00:32:48.973  	common/octeontx:	not in enabled drivers build config
00:32:48.973  	bus/auxiliary:	not in enabled drivers build config
00:32:48.973  	bus/cdx:	not in enabled drivers build config
00:32:48.973  	bus/dpaa:	not in enabled drivers build config
00:32:48.973  	bus/fslmc:	not in enabled drivers build config
00:32:48.973  	bus/ifpga:	not in enabled drivers build config
00:32:48.973  	bus/platform:	not in enabled drivers build config
00:32:48.973  	bus/vmbus:	not in enabled drivers build config
00:32:48.973  	common/cnxk:	not in enabled drivers build config
00:32:48.973  	common/mlx5:	not in enabled drivers build config
00:32:48.973  	common/nfp:	not in enabled drivers build config
00:32:48.973  	common/qat:	not in enabled drivers build config
00:32:48.973  	common/sfc_efx:	not in enabled drivers build config
00:32:48.973  	mempool/bucket:	not in enabled drivers build config
00:32:48.973  	mempool/cnxk:	not in enabled drivers build config
00:32:48.973  	mempool/dpaa:	not in enabled drivers build config
00:32:48.973  	mempool/dpaa2:	not in enabled drivers build config
00:32:48.973  	mempool/octeontx:	not in enabled drivers build config
00:32:48.973  	mempool/stack:	not in enabled drivers build config
00:32:48.973  	dma/cnxk:	not in enabled drivers build config
00:32:48.973  	dma/dpaa:	not in enabled drivers build config
00:32:48.973  	dma/dpaa2:	not in enabled drivers build config
00:32:48.973  	dma/hisilicon:	not in enabled drivers build config
00:32:48.973  	dma/idxd:	not in enabled drivers build config
00:32:48.973  	dma/ioat:	not in enabled drivers build config
00:32:48.973  	dma/skeleton:	not in enabled drivers build config
00:32:48.973  	net/af_packet:	not in enabled drivers build config
00:32:48.973  	net/af_xdp:	not in enabled drivers build config
00:32:48.973  	net/ark:	not in enabled drivers build config
00:32:48.973  	net/atlantic:	not in enabled drivers build config
00:32:48.973  	net/avp:	not in enabled drivers build config
00:32:48.973  	net/axgbe:	not in enabled drivers build config
00:32:48.973  	net/bnx2x:	not in enabled drivers build config
00:32:48.973  	net/bnxt:	not in enabled drivers build config
00:32:48.973  	net/bonding:	not in enabled drivers build config
00:32:48.973  	net/cnxk:	not in enabled drivers build config
00:32:48.973  	net/cpfl:	not in enabled drivers build config
00:32:48.973  	net/cxgbe:	not in enabled drivers build config
00:32:48.973  	net/dpaa:	not in enabled drivers build config
00:32:48.973  	net/dpaa2:	not in enabled drivers build config
00:32:48.973  	net/e1000:	not in enabled drivers build config
00:32:48.973  	net/ena:	not in enabled drivers build config
00:32:48.973  	net/enetc:	not in enabled drivers build config
00:32:48.973  	net/enetfec:	not in enabled drivers build config
00:32:48.973  	net/enic:	not in enabled drivers build config
00:32:48.973  	net/failsafe:	not in enabled drivers build config
00:32:48.973  	net/fm10k:	not in enabled drivers build config
00:32:48.973  	net/gve:	not in enabled drivers build config
00:32:48.973  	net/hinic:	not in enabled drivers build config
00:32:48.973  	net/hns3:	not in enabled drivers build config
00:32:48.973  	net/i40e:	not in enabled drivers build config
00:32:48.973  	net/iavf:	not in enabled drivers build config
00:32:48.973  	net/ice:	not in enabled drivers build config
00:32:48.973  	net/idpf:	not in enabled drivers build config
00:32:48.973  	net/igc:	not in enabled drivers build config
00:32:48.973  	net/ionic:	not in enabled drivers build config
00:32:48.973  	net/ipn3ke:	not in enabled drivers build config
00:32:48.973  	net/ixgbe:	not in enabled drivers build config
00:32:48.973  	net/mana:	not in enabled drivers build config
00:32:48.973  	net/memif:	not in enabled drivers build config
00:32:48.973  	net/mlx4:	not in enabled drivers build config
00:32:48.973  	net/mlx5:	not in enabled drivers build config
00:32:48.973  	net/mvneta:	not in enabled drivers build config
00:32:48.973  	net/mvpp2:	not in enabled drivers build config
00:32:48.973  	net/netvsc:	not in enabled drivers build config
00:32:48.973  	net/nfb:	not in enabled drivers build config
00:32:48.973  	net/nfp:	not in enabled drivers build config
00:32:48.973  	net/ngbe:	not in enabled drivers build config
00:32:48.973  	net/null:	not in enabled drivers build config
00:32:48.973  	net/octeontx:	not in enabled drivers build config
00:32:48.973  	net/octeon_ep:	not in enabled drivers build config
00:32:48.973  	net/pcap:	not in enabled drivers build config
00:32:48.973  	net/pfe:	not in enabled drivers build config
00:32:48.973  	net/qede:	not in enabled drivers build config
00:32:48.973  	net/ring:	not in enabled drivers build config
00:32:48.973  	net/sfc:	not in enabled drivers build config
00:32:48.973  	net/softnic:	not in enabled drivers build config
00:32:48.973  	net/tap:	not in enabled drivers build config
00:32:48.973  	net/thunderx:	not in enabled drivers build config
00:32:48.973  	net/txgbe:	not in enabled drivers build config
00:32:48.973  	net/vdev_netvsc:	not in enabled drivers build config
00:32:48.973  	net/vhost:	not in enabled drivers build config
00:32:48.973  	net/virtio:	not in enabled drivers build config
00:32:48.973  	net/vmxnet3:	not in enabled drivers build config
00:32:48.973  	raw/*:	missing internal dependency, "rawdev"
00:32:48.973  	crypto/armv8:	not in enabled drivers build config
00:32:48.973  	crypto/bcmfs:	not in enabled drivers build config
00:32:48.973  	crypto/caam_jr:	not in enabled drivers build config
00:32:48.973  	crypto/ccp:	not in enabled drivers build config
00:32:48.973  	crypto/cnxk:	not in enabled drivers build config
00:32:48.973  	crypto/dpaa_sec:	not in enabled drivers build config
00:32:48.973  	crypto/dpaa2_sec:	not in enabled drivers build config
00:32:48.973  	crypto/ipsec_mb:	not in enabled drivers build config
00:32:48.973  	crypto/mlx5:	not in enabled drivers build config
00:32:48.973  	crypto/mvsam:	not in enabled drivers build config
00:32:48.973  	crypto/nitrox:	not in enabled drivers build config
00:32:48.973  	crypto/null:	not in enabled drivers build config
00:32:48.973  	crypto/octeontx:	not in enabled drivers build config
00:32:48.973  	crypto/openssl:	not in enabled drivers build config
00:32:48.973  	crypto/scheduler:	not in enabled drivers build config
00:32:48.973  	crypto/uadk:	not in enabled drivers build config
00:32:48.973  	crypto/virtio:	not in enabled drivers build config
00:32:48.973  	compress/isal:	not in enabled drivers build config
00:32:48.973  	compress/mlx5:	not in enabled drivers build config
00:32:48.973  	compress/octeontx:	not in enabled drivers build config
00:32:48.973  	compress/zlib:	not in enabled drivers build config
00:32:48.973  	regex/*:	missing internal dependency, "regexdev"
00:32:48.974  	ml/*:	missing internal dependency, "mldev"
00:32:48.974  	vdpa/ifc:	not in enabled drivers build config
00:32:48.974  	vdpa/mlx5:	not in enabled drivers build config
00:32:48.974  	vdpa/nfp:	not in enabled drivers build config
00:32:48.974  	vdpa/sfc:	not in enabled drivers build config
00:32:48.974  	event/*:	missing internal dependency, "eventdev"
00:32:48.974  	baseband/*:	missing internal dependency, "bbdev"
00:32:48.974  	gpu/*:	missing internal dependency, "gpudev"
00:32:48.974  	
00:32:48.974  
00:32:49.231  Build targets in project: 85
00:32:49.231  
00:32:49.231  DPDK 23.11.0
00:32:49.231  
00:32:49.231    User defined options
00:32:49.231      default_library    : static
00:32:49.231      libdir             : lib
00:32:49.231      prefix             : /home/vagrant/spdk_repo/spdk/dpdk/build
00:32:49.231      b_lto              : true
00:32:49.231      b_sanitize         : address
00:32:49.231      c_args             : -fPIC -Werror  -Wno-stringop-overflow -fcommon
00:32:49.231      c_link_args        : 
00:32:49.231      cpu_instruction_set: native
00:32:49.231      disable_apps       : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf
00:32:49.231      disable_libs       : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev
00:32:49.231      enable_docs        : false
00:32:49.231      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring
00:32:49.231      enable_kmods       : false
00:32:49.231      tests              : false
00:32:49.231  
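(editor's note: the "User defined options" summary above is Meson's echo of the -D options used for this DPDK sub-build. An illustrative stand-alone equivalent — not the literal command SPDK's configure ran, and with the long disable_apps/disable_libs lists elided:)

    meson setup build-tmp \
        -Ddefault_library=static -Dlibdir=lib \
        -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Db_lto=true -Db_sanitize=address \
        -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps=... -Ddisable_libs=... \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_kmods=false -Dtests=false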
00:32:49.231  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:32:49.799  ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:32:49.799  [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:32:49.799  [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:32:49.799  [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:32:49.799  [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:32:49.799  [5/265] Linking static target lib/librte_kvargs.a
00:32:49.799  [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:32:49.799  [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:32:50.058  [8/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:32:50.058  [9/265] Linking static target lib/librte_log.a
00:32:50.058  [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:32:50.058  [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:32:50.058  [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:32:50.058  [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:32:50.317  [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:32:50.317  [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:32:50.317  [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:32:50.317  [17/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:32:50.575  [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:32:50.575  [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:32:50.575  [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:32:50.575  [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:32:50.575  [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:32:50.834  [23/265] Linking target lib/librte_log.so.24.0
00:32:50.834  [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:32:50.834  [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:32:50.834  [26/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:32:50.834  [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:32:51.093  [28/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:32:51.093  [29/265] Linking static target lib/librte_telemetry.a
00:32:51.093  [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:32:51.093  [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:32:51.093  [32/265] Linking target lib/librte_kvargs.so.24.0
00:32:51.093  [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:32:51.093  [34/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:32:51.093  [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:32:51.093  [36/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:32:51.093  [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:32:51.351  [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:32:51.351  [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:32:51.351  [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:32:51.351  [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:32:51.351  [42/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:32:51.610  [43/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:32:51.610  [44/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:32:51.610  [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:32:51.610  [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:32:51.869  [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:32:51.869  [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:32:51.869  [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:32:51.869  [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:32:52.129  [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:32:52.129  [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:32:52.129  [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:32:52.129  [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:32:52.129  [55/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:32:52.129  [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:32:52.129  [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:32:52.129  [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:32:52.129  [59/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:32:52.388  [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:32:52.388  [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:32:52.388  [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:32:52.388  [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:32:52.388  [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:32:52.388  [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:32:52.647  [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:32:52.647  [67/265] Linking target lib/librte_telemetry.so.24.0
00:32:52.647  [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:32:52.647  [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:32:52.647  [70/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:32:52.647  [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:32:52.647  [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:32:52.647  [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:32:52.647  [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:32:52.905  [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:32:52.905  [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:32:52.905  [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:32:52.905  [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:32:53.164  [79/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:32:53.164  [80/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:32:53.164  [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:32:53.164  [82/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:32:53.164  [83/265] Linking static target lib/librte_ring.a
00:32:53.423  [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:32:53.423  [85/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:32:53.423  [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:32:53.423  [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:32:53.423  [88/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:32:53.682  [89/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:32:53.682  [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:32:53.682  [91/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:32:53.682  [92/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:32:53.682  [93/265] Linking static target lib/librte_eal.a
00:32:53.941  [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:32:53.941  [95/265] Linking static target lib/librte_mempool.a
00:32:53.941  [96/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:32:53.941  [97/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:32:53.941  [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:32:53.941  [99/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:32:53.941  [100/265] Linking static target lib/librte_rcu.a
00:32:53.941  [101/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:32:54.199  [102/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:32:54.199  [103/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:32:54.199  [104/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:32:54.458  [105/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:32:54.458  [106/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:32:54.458  [107/265] Linking static target lib/librte_net.a
00:32:54.458  [108/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:32:54.458  [109/265] Linking static target lib/librte_meter.a
00:32:54.458  [110/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:32:54.458  [111/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:32:54.458  [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:32:54.717  [113/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:32:54.717  [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:32:54.717  [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:32:54.717  [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:32:54.974  [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:32:55.233  [118/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:32:55.233  [119/265] Linking static target lib/librte_mbuf.a
00:32:55.233  [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:32:55.492  [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:32:55.492  [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:32:55.751  [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:32:55.751  [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:32:55.751  [125/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:32:55.751  [126/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:32:55.751  [127/265] Linking static target lib/librte_pci.a
00:32:55.751  [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:32:55.751  [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:32:55.751  [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:32:56.009  [131/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:32:56.009  [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:32:56.009  [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:32:56.009  [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:32:56.009  [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:32:56.009  [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:32:56.009  [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:32:56.009  [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:32:56.268  [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:32:56.268  [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:32:56.268  [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:32:56.268  [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:32:56.268  [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:32:56.527  [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:32:56.527  [145/265] Linking static target lib/librte_cmdline.a
00:32:56.527  [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:32:56.786  [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:32:56.786  [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:32:57.045  [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:32:57.045  [150/265] Linking static target lib/librte_timer.a
00:32:57.045  [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:32:57.045  [152/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:32:57.045  [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:32:57.045  [154/265] Linking static target lib/librte_compressdev.a
00:32:57.304  [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:32:57.304  [156/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:32:57.304  [157/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:32:57.304  [158/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:32:57.565  [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:32:57.565  [160/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:32:57.565  [161/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:32:57.565  [162/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:32:57.565  [163/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:32:58.160  [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:32:58.160  [165/265] Linking static target lib/librte_dmadev.a
00:32:58.160  [166/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:32:58.160  [167/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:32:58.160  [168/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:32:58.160  [169/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:32:58.419  [170/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:32:58.419  [171/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:32:58.419  [172/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:32:58.419  [173/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:32:58.678  [174/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:32:58.678  [175/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:32:58.678  [176/265] Linking static target lib/librte_power.a
00:32:58.937  [177/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:32:58.937  [178/265] Linking static target lib/librte_reorder.a
00:32:59.196  [179/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:32:59.196  [180/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:32:59.196  [181/265] Linking static target lib/librte_security.a
00:32:59.196  [182/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:32:59.455  [183/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:32:59.455  [184/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:32:59.455  [185/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:32:59.455  [186/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:32:59.455  [187/265] Linking static target lib/librte_ethdev.a
00:32:59.455  [188/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:32:59.714  [189/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:32:59.714  [190/265] Linking static target lib/librte_cryptodev.a
00:32:59.973  [191/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:32:59.973  [192/265] Linking static target lib/librte_hash.a
00:32:59.973  [193/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:33:00.232  [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:33:00.494  [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:33:00.494  [196/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:33:00.494  [197/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:33:00.754  [198/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:33:00.754  [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:33:00.754  [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:33:00.754  [201/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:33:01.013  [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:33:01.013  [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:33:01.272  [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:33:01.531  [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:33:01.531  [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a
00:33:01.532  [207/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:33:01.532  [208/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:33:01.532  [209/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:33:01.532  [210/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:33:01.532  [211/265] Linking static target drivers/librte_bus_vdev.a
00:33:01.532  [212/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:33:01.532  [213/265] Linking static target drivers/libtmp_rte_bus_pci.a
00:33:01.791  [214/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:33:01.791  [215/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:33:01.791  [216/265] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:33:01.791  [217/265] Linking static target drivers/libtmp_rte_mempool_ring.a
00:33:01.791  [218/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:33:01.791  [219/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:33:01.791  [220/265] Linking static target drivers/librte_bus_pci.a
00:33:02.050  [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:33:02.050  [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:33:02.050  [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:33:02.050  [224/265] Linking static target drivers/librte_mempool_ring.a
00:33:02.309  [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:33:04.844  [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:33:11.428  [227/265] Linking target lib/librte_eal.so.24.0
00:33:11.428  lto-wrapper: warning: using serial compilation of 5 LTRANS jobs
00:33:11.428  [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:33:11.428  [229/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:33:11.428  [230/265] Linking target lib/librte_meter.so.24.0
00:33:11.428  [231/265] Linking target lib/librte_pci.so.24.0
00:33:11.428  [232/265] Linking target lib/librte_ring.so.24.0
00:33:11.428  [233/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:33:11.428  [234/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:33:11.428  [235/265] Linking target drivers/librte_bus_vdev.so.24.0
00:33:11.428  [236/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:33:11.428  [237/265] Linking target lib/librte_timer.so.24.0
00:33:11.428  [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:33:11.428  [239/265] Linking target lib/librte_dmadev.so.24.0
00:33:11.428  [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:33:11.428  [241/265] Linking target lib/librte_rcu.so.24.0
00:33:11.687  [242/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:33:11.687  [243/265] Linking target lib/librte_mempool.so.24.0
00:33:11.687  [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:33:11.946  [245/265] Linking target drivers/librte_bus_pci.so.24.0
00:33:12.205  [246/265] Linking target drivers/librte_mempool_ring.so.24.0
00:33:13.582  [247/265] Linking target lib/librte_mbuf.so.24.0
00:33:13.582  [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:33:14.150  [249/265] Linking target lib/librte_reorder.so.24.0
00:33:14.150  [250/265] Linking target lib/librte_compressdev.so.24.0
00:33:14.409  [251/265] Linking target lib/librte_net.so.24.0
00:33:14.668  [252/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:33:16.045  [253/265] Linking target lib/librte_cmdline.so.24.0
00:33:16.045  [254/265] Linking target lib/librte_cryptodev.so.24.0
00:33:16.045  [255/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:33:16.305  [256/265] Linking target lib/librte_security.so.24.0
00:33:18.840  [257/265] Linking target lib/librte_hash.so.24.0
00:33:18.840  [258/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:33:25.408  [259/265] Linking target lib/librte_ethdev.so.24.0
00:33:25.408  lto-wrapper: warning: using serial compilation of 6 LTRANS jobs
00:33:25.408  [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:33:27.311  [261/265] Linking target lib/librte_power.so.24.0
00:33:31.501  [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:33:31.501  [263/265] Linking static target lib/librte_vhost.a
00:33:33.457  [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:34:20.137  [265/265] Linking target lib/librte_vhost.so.24.0
00:34:20.137  lto-wrapper: warning: using serial compilation of 8 LTRANS jobs
00:34:20.137  INFO: autodetecting backend as ninja
00:34:20.137  INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:34:20.137    CC lib/ut/ut.o
00:34:20.137    CC lib/log/log_flags.o
00:34:20.137    CC lib/log/log.o
00:34:20.137    CC lib/log/log_deprecated.o
00:34:20.137    CC lib/ut_mock/mock.o
00:34:20.137    LIB libspdk_ut_mock.a
00:34:20.137    LIB libspdk_ut.a
00:34:20.137    LIB libspdk_log.a
00:34:20.137    CC lib/util/base64.o
00:34:20.137    CC lib/util/bit_array.o
00:34:20.137    CC lib/util/cpuset.o
00:34:20.137    CC lib/util/crc32.o
00:34:20.137    CC lib/util/crc16.o
00:34:20.137    CC lib/util/crc32c.o
00:34:20.137    CC lib/dma/dma.o
00:34:20.137    CC lib/ioat/ioat.o
00:34:20.137    CXX lib/trace_parser/trace.o
00:34:20.137    CC lib/vfio_user/host/vfio_user_pci.o
00:34:20.137    CC lib/util/crc32_ieee.o
00:34:20.137    CC lib/util/crc64.o
00:34:20.137    CC lib/util/dif.o
00:34:20.137    LIB libspdk_dma.a
00:34:20.137    CC lib/util/fd.o
00:34:20.137    CC lib/util/file.o
00:34:20.137    CC lib/util/hexlify.o
00:34:20.137    LIB libspdk_ioat.a
00:34:20.137    CC lib/util/iov.o
00:34:20.137    CC lib/util/math.o
00:34:20.137    CC lib/vfio_user/host/vfio_user.o
00:34:20.137    CC lib/util/pipe.o
00:34:20.137    CC lib/util/strerror_tls.o
00:34:20.137    CC lib/util/string.o
00:34:20.137    CC lib/util/uuid.o
00:34:20.137    CC lib/util/fd_group.o
00:34:20.137    CC lib/util/xor.o
00:34:20.137    CC lib/util/zipf.o
00:34:20.137    LIB libspdk_vfio_user.a
00:34:20.137    LIB libspdk_util.a
00:34:20.137    LIB libspdk_trace_parser.a
00:34:20.137    CC lib/idxd/idxd.o
00:34:20.137    CC lib/idxd/idxd_user.o
00:34:20.137    CC lib/json/json_util.o
00:34:20.137    CC lib/json/json_parse.o
00:34:20.137    CC lib/json/json_write.o
00:34:20.137    CC lib/conf/conf.o
00:34:20.137    CC lib/env_dpdk/env.o
00:34:20.137    CC lib/env_dpdk/memory.o
00:34:20.137    CC lib/vmd/vmd.o
00:34:20.137    CC lib/rdma/common.o
00:34:20.137    CC lib/rdma/rdma_verbs.o
00:34:20.137    CC lib/vmd/led.o
00:34:20.137    LIB libspdk_conf.a
00:34:20.137    CC lib/env_dpdk/pci.o
00:34:20.137    CC lib/env_dpdk/init.o
00:34:20.137    LIB libspdk_json.a
00:34:20.137    CC lib/env_dpdk/threads.o
00:34:20.137    LIB libspdk_idxd.a
00:34:20.137    CC lib/env_dpdk/pci_ioat.o
00:34:20.137    CC lib/env_dpdk/pci_virtio.o
00:34:20.137    LIB libspdk_vmd.a
00:34:20.137    LIB libspdk_rdma.a
00:34:20.137    CC lib/env_dpdk/pci_vmd.o
00:34:20.137    CC lib/env_dpdk/pci_idxd.o
00:34:20.137    CC lib/env_dpdk/pci_event.o
00:34:20.137    CC lib/env_dpdk/sigbus_handler.o
00:34:20.137    CC lib/jsonrpc/jsonrpc_server.o
00:34:20.137    CC lib/env_dpdk/pci_dpdk.o
00:34:20.137    CC lib/env_dpdk/pci_dpdk_2207.o
00:34:20.137    CC lib/env_dpdk/pci_dpdk_2211.o
00:34:20.137    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:34:20.137    CC lib/jsonrpc/jsonrpc_client.o
00:34:20.137    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:34:20.137    LIB libspdk_jsonrpc.a
00:34:20.137    CC lib/rpc/rpc.o
00:34:20.137    LIB libspdk_env_dpdk.a
00:34:20.137    LIB libspdk_rpc.a
00:34:20.137    CC lib/trace/trace.o
00:34:20.137    CC lib/trace/trace_flags.o
00:34:20.137    CC lib/trace/trace_rpc.o
00:34:20.137    CC lib/sock/sock.o
00:34:20.137    CC lib/sock/sock_rpc.o
00:34:20.137    CC lib/notify/notify.o
00:34:20.137    CC lib/notify/notify_rpc.o
00:34:20.137    LIB libspdk_notify.a
00:34:20.137    LIB libspdk_trace.a
00:34:20.137    LIB libspdk_sock.a
00:34:20.137    CC lib/thread/thread.o
00:34:20.137    CC lib/thread/iobuf.o
00:34:20.137    CC lib/nvme/nvme_ctrlr_cmd.o
00:34:20.137    CC lib/nvme/nvme_ctrlr.o
00:34:20.137    CC lib/nvme/nvme_ns_cmd.o
00:34:20.137    CC lib/nvme/nvme_fabric.o
00:34:20.137    CC lib/nvme/nvme_ns.o
00:34:20.137    CC lib/nvme/nvme_qpair.o
00:34:20.137    CC lib/nvme/nvme_pcie.o
00:34:20.137    CC lib/nvme/nvme_pcie_common.o
00:34:20.137    CC lib/nvme/nvme.o
00:34:20.137    LIB libspdk_thread.a
00:34:20.137    CC lib/nvme/nvme_quirks.o
00:34:20.137    CC lib/nvme/nvme_transport.o
00:34:20.137    CC lib/nvme/nvme_discovery.o
00:34:20.137    CC lib/accel/accel.o
00:34:20.137    CC lib/accel/accel_rpc.o
00:34:20.137    CC lib/blob/blobstore.o
00:34:20.137    CC lib/accel/accel_sw.o
00:34:20.137    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:34:20.137    CC lib/init/json_config.o
00:34:20.137    CC lib/virtio/virtio.o
00:34:20.137    CC lib/virtio/virtio_vhost_user.o
00:34:20.137    CC lib/init/subsystem.o
00:34:20.137    CC lib/init/subsystem_rpc.o
00:34:20.137    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:34:20.137    CC lib/nvme/nvme_tcp.o
00:34:20.137    CC lib/nvme/nvme_opal.o
00:34:20.137    CC lib/init/rpc.o
00:34:20.137    LIB libspdk_accel.a
00:34:20.137    CC lib/nvme/nvme_io_msg.o
00:34:20.137    CC lib/nvme/nvme_poll_group.o
00:34:20.137    CC lib/virtio/virtio_vfio_user.o
00:34:20.137    CC lib/virtio/virtio_pci.o
00:34:20.137    LIB libspdk_init.a
00:34:20.137    CC lib/nvme/nvme_zns.o
00:34:20.137    CC lib/bdev/bdev.o
00:34:20.137    CC lib/event/app.o
00:34:20.137    LIB libspdk_virtio.a
00:34:20.137    CC lib/event/reactor.o
00:34:20.137    CC lib/event/log_rpc.o
00:34:20.137    CC lib/event/app_rpc.o
00:34:20.137    CC lib/bdev/bdev_rpc.o
00:34:20.137    CC lib/bdev/bdev_zone.o
00:34:20.137    CC lib/event/scheduler_static.o
00:34:20.137    CC lib/bdev/part.o
00:34:20.137    CC lib/bdev/scsi_nvme.o
00:34:20.137    CC lib/nvme/nvme_cuse.o
00:34:20.137    LIB libspdk_event.a
00:34:20.137    CC lib/nvme/nvme_vfio_user.o
00:34:20.137    CC lib/nvme/nvme_rdma.o
00:34:20.137    CC lib/blob/request.o
00:34:20.137    CC lib/blob/zeroes.o
00:34:20.137    CC lib/blob/blob_bs_dev.o
00:34:20.137    LIB libspdk_blob.a
00:34:20.137    CC lib/lvol/lvol.o
00:34:20.137    CC lib/blobfs/tree.o
00:34:20.137    CC lib/blobfs/blobfs.o
00:34:20.137    LIB libspdk_blobfs.a
00:34:20.137    LIB libspdk_lvol.a
00:34:20.137    LIB libspdk_bdev.a
00:34:20.137    LIB libspdk_nvme.a
00:34:20.137    CC lib/nbd/nbd.o
00:34:20.137    CC lib/nbd/nbd_rpc.o
00:34:20.137    CC lib/scsi/dev.o
00:34:20.137    CC lib/scsi/lun.o
00:34:20.137    CC lib/scsi/port.o
00:34:20.137    CC lib/scsi/scsi_bdev.o
00:34:20.137    CC lib/scsi/scsi_pr.o
00:34:20.137    CC lib/scsi/scsi.o
00:34:20.137    CC lib/ftl/ftl_core.o
00:34:20.137    CC lib/nvmf/ctrlr.o
00:34:20.137    CC lib/ftl/ftl_init.o
00:34:20.137    CC lib/ftl/ftl_layout.o
00:34:20.137    CC lib/ftl/ftl_debug.o
00:34:20.137    CC lib/ftl/ftl_io.o
00:34:20.137    CC lib/ftl/ftl_sb.o
00:34:20.137    CC lib/ftl/ftl_l2p.o
00:34:20.137    LIB libspdk_nbd.a
00:34:20.137    CC lib/nvmf/ctrlr_discovery.o
00:34:20.137    CC lib/nvmf/ctrlr_bdev.o
00:34:20.137    CC lib/scsi/scsi_rpc.o
00:34:20.137    CC lib/scsi/task.o
00:34:20.137    CC lib/nvmf/subsystem.o
00:34:20.137    CC lib/ftl/ftl_l2p_flat.o
00:34:20.137    CC lib/nvmf/nvmf.o
00:34:20.137    CC lib/nvmf/nvmf_rpc.o
00:34:20.137    CC lib/nvmf/transport.o
00:34:20.137    CC lib/nvmf/tcp.o
00:34:20.137    CC lib/ftl/ftl_nv_cache.o
00:34:20.137    LIB libspdk_scsi.a
00:34:20.137    CC lib/ftl/ftl_band.o
00:34:20.137    CC lib/nvmf/rdma.o
00:34:20.137    CC lib/iscsi/conn.o
00:34:20.137    CC lib/iscsi/init_grp.o
00:34:20.137    CC lib/iscsi/iscsi.o
00:34:20.137    CC lib/iscsi/md5.o
00:34:20.137    CC lib/iscsi/param.o
00:34:20.137    CC lib/iscsi/portal_grp.o
00:34:20.137    CC lib/ftl/ftl_band_ops.o
00:34:20.137    CC lib/ftl/ftl_writer.o
00:34:20.137    CC lib/ftl/ftl_rq.o
00:34:20.137    CC lib/ftl/ftl_reloc.o
00:34:20.396    CC lib/ftl/ftl_l2p_cache.o
00:34:20.396    CC lib/ftl/ftl_p2l.o
00:34:20.396    CC lib/ftl/mngt/ftl_mngt.o
00:34:20.396    CC lib/iscsi/tgt_node.o
00:34:20.396    CC lib/iscsi/iscsi_subsystem.o
00:34:20.396    CC lib/iscsi/iscsi_rpc.o
00:34:20.396    CC lib/vhost/vhost.o
00:34:20.396    CC lib/vhost/vhost_rpc.o
00:34:20.396    CC lib/vhost/vhost_scsi.o
00:34:20.396    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:34:20.655    LIB libspdk_nvmf.a
00:34:20.655    CC lib/vhost/vhost_blk.o
00:34:20.655    CC lib/vhost/rte_vhost_user.o
00:34:20.655    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:34:20.655    CC lib/ftl/mngt/ftl_mngt_startup.o
00:34:20.655    CC lib/iscsi/task.o
00:34:20.655    CC lib/ftl/mngt/ftl_mngt_md.o
00:34:20.655    CC lib/ftl/mngt/ftl_mngt_misc.o
00:34:20.655    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:34:20.655    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:34:20.655    LIB libspdk_iscsi.a
00:34:20.655    CC lib/ftl/mngt/ftl_mngt_band.o
00:34:20.914    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:34:20.914    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:34:20.914    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:34:20.914    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:34:20.914    CC lib/ftl/utils/ftl_conf.o
00:34:20.914    CC lib/ftl/utils/ftl_md.o
00:34:20.914    CC lib/ftl/utils/ftl_mempool.o
00:34:20.914    CC lib/ftl/utils/ftl_bitmap.o
00:34:20.914    CC lib/ftl/utils/ftl_property.o
00:34:20.914    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:34:20.914    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:34:21.173    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:34:21.173    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:34:21.173    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:34:21.173    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:34:21.173    CC lib/ftl/upgrade/ftl_sb_v3.o
00:34:21.173    CC lib/ftl/upgrade/ftl_sb_v5.o
00:34:21.173    CC lib/ftl/nvc/ftl_nvc_dev.o
00:34:21.173    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:34:21.173    CC lib/ftl/base/ftl_base_dev.o
00:34:21.173    LIB libspdk_vhost.a
00:34:21.173    CC lib/ftl/base/ftl_base_bdev.o
00:34:21.431    LIB libspdk_ftl.a
00:34:21.431    CC module/env_dpdk/env_dpdk_rpc.o
00:34:21.431    CC module/accel/error/accel_error.o
00:34:21.431    CC module/accel/iaa/accel_iaa.o
00:34:21.690    CC module/accel/dsa/accel_dsa.o
00:34:21.690    CC module/sock/posix/posix.o
00:34:21.690    CC module/scheduler/gscheduler/gscheduler.o
00:34:21.690    CC module/accel/ioat/accel_ioat.o
00:34:21.690    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:34:21.690    CC module/scheduler/dynamic/scheduler_dynamic.o
00:34:21.690    CC module/blob/bdev/blob_bdev.o
00:34:21.690    LIB libspdk_env_dpdk_rpc.a
00:34:21.690    CC module/accel/dsa/accel_dsa_rpc.o
00:34:21.690    LIB libspdk_scheduler_gscheduler.a
00:34:21.690    LIB libspdk_scheduler_dpdk_governor.a
00:34:21.690    CC module/accel/error/accel_error_rpc.o
00:34:21.690    CC module/accel/ioat/accel_ioat_rpc.o
00:34:21.690    CC module/accel/iaa/accel_iaa_rpc.o
00:34:21.690    LIB libspdk_blob_bdev.a
00:34:21.690    LIB libspdk_scheduler_dynamic.a
00:34:21.690    LIB libspdk_accel_dsa.a
00:34:21.949    CC module/blobfs/bdev/blobfs_bdev.o
00:34:21.949    CC module/bdev/malloc/bdev_malloc.o
00:34:21.949    CC module/bdev/error/vbdev_error.o
00:34:21.949    LIB libspdk_accel_error.a
00:34:21.949    CC module/bdev/delay/vbdev_delay.o
00:34:21.949    LIB libspdk_accel_iaa.a
00:34:21.949    CC module/bdev/gpt/gpt.o
00:34:21.949    LIB libspdk_accel_ioat.a
00:34:21.949    CC module/bdev/lvol/vbdev_lvol.o
00:34:21.949    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:34:21.949    CC module/bdev/gpt/vbdev_gpt.o
00:34:21.949    CC module/bdev/error/vbdev_error_rpc.o
00:34:21.949    LIB libspdk_sock_posix.a
00:34:21.949    CC module/bdev/delay/vbdev_delay_rpc.o
00:34:21.949    CC module/bdev/malloc/bdev_malloc_rpc.o
00:34:21.949    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:34:21.949    LIB libspdk_bdev_error.a
00:34:21.949    LIB libspdk_bdev_gpt.a
00:34:21.949    LIB libspdk_bdev_delay.a
00:34:22.208    CC module/bdev/null/bdev_null.o
00:34:22.208    CC module/bdev/null/bdev_null_rpc.o
00:34:22.208    CC module/bdev/nvme/bdev_nvme.o
00:34:22.208    LIB libspdk_blobfs_bdev.a
00:34:22.208    LIB libspdk_bdev_malloc.a
00:34:22.208    CC module/bdev/passthru/vbdev_passthru.o
00:34:22.208    LIB libspdk_bdev_lvol.a
00:34:22.208    CC module/bdev/raid/bdev_raid.o
00:34:22.208    CC module/bdev/split/vbdev_split.o
00:34:22.208    CC module/bdev/split/vbdev_split_rpc.o
00:34:22.208    CC module/bdev/zone_block/vbdev_zone_block.o
00:34:22.208    CC module/bdev/ftl/bdev_ftl.o
00:34:22.208    CC module/bdev/aio/bdev_aio.o
00:34:22.208    CC module/bdev/ftl/bdev_ftl_rpc.o
00:34:22.208    LIB libspdk_bdev_null.a
00:34:22.208    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:34:22.208    CC module/bdev/raid/bdev_raid_rpc.o
00:34:22.208    LIB libspdk_bdev_split.a
00:34:22.208    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:34:22.467    CC module/bdev/raid/bdev_raid_sb.o
00:34:22.467    CC module/bdev/iscsi/bdev_iscsi.o
00:34:22.467    LIB libspdk_bdev_ftl.a
00:34:22.467    CC module/bdev/aio/bdev_aio_rpc.o
00:34:22.467    LIB libspdk_bdev_passthru.a
00:34:22.467    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:34:22.467    CC module/bdev/raid/raid0.o
00:34:22.467    CC module/bdev/raid/raid1.o
00:34:22.467    CC module/bdev/virtio/bdev_virtio_scsi.o
00:34:22.467    LIB libspdk_bdev_zone_block.a
00:34:22.467    CC module/bdev/raid/concat.o
00:34:22.467    CC module/bdev/nvme/bdev_nvme_rpc.o
00:34:22.467    LIB libspdk_bdev_aio.a
00:34:22.467    CC module/bdev/nvme/nvme_rpc.o
00:34:22.467    CC module/bdev/nvme/bdev_mdns_client.o
00:34:22.467    CC module/bdev/nvme/vbdev_opal.o
00:34:22.467    CC module/bdev/nvme/vbdev_opal_rpc.o
00:34:22.467    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:34:22.726    LIB libspdk_bdev_iscsi.a
00:34:22.726    CC module/bdev/raid/raid5f.o
00:34:22.726    CC module/bdev/virtio/bdev_virtio_blk.o
00:34:22.726    CC module/bdev/virtio/bdev_virtio_rpc.o
00:34:22.726    LIB libspdk_bdev_virtio.a
00:34:22.985    LIB libspdk_bdev_raid.a
00:34:22.985    LIB libspdk_bdev_nvme.a
00:34:23.244    CC module/event/subsystems/iobuf/iobuf.o
00:34:23.244    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:34:23.244    CC module/event/subsystems/sock/sock.o
00:34:23.244    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:34:23.244    CC module/event/subsystems/vmd/vmd.o
00:34:23.244    CC module/event/subsystems/vmd/vmd_rpc.o
00:34:23.244    CC module/event/subsystems/scheduler/scheduler.o
00:34:23.244    LIB libspdk_event_vhost_blk.a
00:34:23.244    LIB libspdk_event_sock.a
00:34:23.244    LIB libspdk_event_vmd.a
00:34:23.244    LIB libspdk_event_scheduler.a
00:34:23.244    LIB libspdk_event_iobuf.a
00:34:23.503    CC module/event/subsystems/accel/accel.o
00:34:23.503    LIB libspdk_event_accel.a
00:34:23.762    CC module/event/subsystems/bdev/bdev.o
00:34:23.762    LIB libspdk_event_bdev.a
00:34:24.020    CC module/event/subsystems/nbd/nbd.o
00:34:24.020    CC module/event/subsystems/scsi/scsi.o
00:34:24.020    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:34:24.020    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:34:24.020    LIB libspdk_event_nbd.a
00:34:24.020    LIB libspdk_event_scsi.a
00:34:24.278    LIB libspdk_event_nvmf.a
00:34:24.278    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:34:24.278    CC module/event/subsystems/iscsi/iscsi.o
00:34:24.278    LIB libspdk_event_vhost_scsi.a
00:34:24.547    LIB libspdk_event_iscsi.a
00:34:24.547    CC app/trace_record/trace_record.o
00:34:24.547    CXX app/trace/trace.o
00:34:24.547    CC app/spdk_lspci/spdk_lspci.o
00:34:24.547    CC app/iscsi_tgt/iscsi_tgt.o
00:34:24.547    CC examples/accel/perf/accel_perf.o
00:34:24.547    CC app/nvmf_tgt/nvmf_main.o
00:34:24.547    CC app/spdk_tgt/spdk_tgt.o
00:34:24.547    CC test/accel/dif/dif.o
00:34:24.807    CC test/bdev/bdevio/bdevio.o
00:34:24.807    CC test/app/bdev_svc/bdev_svc.o
00:34:24.807    LINK spdk_lspci
00:34:24.807    LINK spdk_trace_record
00:34:24.807    LINK iscsi_tgt
00:34:24.807    LINK nvmf_tgt
00:34:24.807    LINK spdk_trace
00:34:24.807    LINK spdk_tgt
00:34:24.807    LINK bdev_svc
00:34:24.807    LINK accel_perf
00:34:25.065    LINK dif
00:34:25.065    LINK bdevio
00:34:31.630    CC examples/bdev/hello_world/hello_bdev.o
00:34:31.888    LINK hello_bdev
00:34:37.158    CC examples/blob/hello_world/hello_blob.o
00:34:38.535    LINK hello_blob
00:35:10.637    CC test/blobfs/mkfs/mkfs.o
00:35:10.637    LINK mkfs
00:35:10.637    CC examples/ioat/perf/perf.o
00:35:10.637    LINK ioat_perf
00:35:57.321    CC examples/ioat/verify/verify.o
00:35:57.321    LINK verify
00:36:43.997    TEST_HEADER include/spdk/config.h
00:36:43.997    CXX test/cpp_headers/accel.o
00:36:43.997    CXX test/cpp_headers/accel_module.o
00:36:43.997    CXX test/cpp_headers/assert.o
00:36:45.374    CXX test/cpp_headers/barrier.o
00:36:46.750    CXX test/cpp_headers/base64.o
00:36:48.126    CXX test/cpp_headers/bdev.o
00:36:48.693    CXX test/cpp_headers/bdev_module.o
00:36:50.069    CXX test/cpp_headers/bdev_zone.o
00:36:51.005    CXX test/cpp_headers/bit_array.o
00:36:51.941    CXX test/cpp_headers/bit_pool.o
00:36:53.318    CXX test/cpp_headers/blob.o
00:36:54.254    CXX test/cpp_headers/blob_bdev.o
00:36:55.631    CXX test/cpp_headers/blobfs.o
00:36:56.209    CXX test/cpp_headers/blobfs_bdev.o
00:36:56.468    CXX test/cpp_headers/conf.o
00:36:57.404    CXX test/cpp_headers/config.o
00:36:57.663    CXX test/cpp_headers/cpuset.o
00:36:58.230    CC test/dma/test_dma/test_dma.o
00:36:58.798    CXX test/cpp_headers/crc16.o
00:36:59.366    LINK test_dma
00:36:59.625    CXX test/cpp_headers/crc32.o
00:37:00.561    CXX test/cpp_headers/crc64.o
00:37:01.497    CXX test/cpp_headers/dif.o
00:37:02.433    CXX test/cpp_headers/dma.o
00:37:02.433    CC test/env/mem_callbacks/mem_callbacks.o
00:37:03.370    CXX test/cpp_headers/endian.o
00:37:04.307    CXX test/cpp_headers/env.o
00:37:05.242    LINK mem_callbacks
00:37:05.242    CXX test/cpp_headers/env_dpdk.o
00:37:05.810    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:37:06.068    CXX test/cpp_headers/event.o
00:37:07.004    CXX test/cpp_headers/fd.o
00:37:07.004    LINK nvme_fuzz
00:37:07.940    CXX test/cpp_headers/fd_group.o
00:37:08.508    CXX test/cpp_headers/file.o
00:37:09.444    CXX test/cpp_headers/ftl.o
00:37:10.011    CXX test/cpp_headers/gpt_spec.o
00:37:10.947    CXX test/cpp_headers/hexlify.o
00:37:11.514    CXX test/cpp_headers/histogram_data.o
00:37:12.451    CXX test/cpp_headers/idxd.o
00:37:13.018    CXX test/cpp_headers/idxd_spec.o
00:37:13.018    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:37:13.954    CXX test/cpp_headers/init.o
00:37:14.892    CXX test/cpp_headers/ioat.o
00:37:15.460    CXX test/cpp_headers/ioat_spec.o
00:37:15.460    CC examples/bdev/bdevperf/bdevperf.o
00:37:15.718    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:37:16.286    CXX test/cpp_headers/iscsi_spec.o
00:37:16.286    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:37:16.853    LINK iscsi_fuzz
00:37:17.112    CXX test/cpp_headers/json.o
00:37:17.680    LINK bdevperf
00:37:17.680    LINK vhost_fuzz
00:37:17.680    CXX test/cpp_headers/jsonrpc.o
00:37:18.616    CXX test/cpp_headers/likely.o
00:37:18.875    CC examples/blob/cli/blobcli.o
00:37:19.442    CXX test/cpp_headers/log.o
00:37:19.704    LINK blobcli
00:37:19.990    CXX test/cpp_headers/lvol.o
00:37:20.568    CXX test/cpp_headers/memory.o
00:37:21.135    CXX test/cpp_headers/mmio.o
00:37:21.703    CXX test/cpp_headers/nbd.o
00:37:21.962    CXX test/cpp_headers/notify.o
00:37:22.896    CXX test/cpp_headers/nvme.o
00:37:23.832    CXX test/cpp_headers/nvme_intel.o
00:37:24.768    CXX test/cpp_headers/nvme_ocssd.o
00:37:25.703    CXX test/cpp_headers/nvme_ocssd_spec.o
00:37:26.271    CC test/env/vtophys/vtophys.o
00:37:26.529    CXX test/cpp_headers/nvme_spec.o
00:37:26.788    LINK vtophys
00:37:27.355    CXX test/cpp_headers/nvme_zns.o
00:37:28.291    CC app/spdk_nvme_perf/perf.o
00:37:28.550    CXX test/cpp_headers/nvmf.o
00:37:29.117    CXX test/cpp_headers/nvmf_cmd.o
00:37:30.053    LINK spdk_nvme_perf
00:37:30.053    CXX test/cpp_headers/nvmf_fc_spec.o
00:37:30.989    CXX test/cpp_headers/nvmf_spec.o
00:37:31.924    CXX test/cpp_headers/nvmf_transport.o
00:37:33.301    CXX test/cpp_headers/opal.o
00:37:34.237    CXX test/cpp_headers/opal_spec.o
00:37:34.804    CC test/event/event_perf/event_perf.o
00:37:35.062    CXX test/cpp_headers/pci_ids.o
00:37:35.629    LINK event_perf
00:37:35.888    CXX test/cpp_headers/pipe.o
00:37:37.264    CXX test/cpp_headers/queue.o
00:37:37.264    CXX test/cpp_headers/reduce.o
00:37:38.641    CXX test/cpp_headers/rpc.o
00:37:40.016    CXX test/cpp_headers/scheduler.o
00:37:40.952    CXX test/cpp_headers/scsi.o
00:37:42.854    CXX test/cpp_headers/scsi_spec.o
00:37:43.421    CXX test/cpp_headers/sock.o
00:37:44.797    CXX test/cpp_headers/stdinc.o
00:37:46.249    CXX test/cpp_headers/string.o
00:37:47.626    CXX test/cpp_headers/thread.o
00:37:48.562    CXX test/cpp_headers/trace.o
00:37:49.497    CXX test/cpp_headers/trace_parser.o
00:37:50.874    CXX test/cpp_headers/tree.o
00:37:50.874    CXX test/cpp_headers/ublk.o
00:37:51.809    CXX test/cpp_headers/util.o
00:37:52.746    CXX test/cpp_headers/uuid.o
00:37:53.682    CXX test/cpp_headers/version.o
00:37:53.682    CC test/event/reactor/reactor.o
00:37:53.941    CC test/event/reactor_perf/reactor_perf.o
00:37:53.941    CXX test/cpp_headers/vfio_user_pci.o
00:37:54.507    LINK reactor
00:37:54.764    LINK reactor_perf
00:37:55.022    CXX test/cpp_headers/vfio_user_spec.o
00:37:55.958    CXX test/cpp_headers/vhost.o
00:37:56.894    CXX test/cpp_headers/vmd.o
00:37:57.829    CXX test/cpp_headers/xor.o
00:37:58.763    CXX test/cpp_headers/zipf.o
00:38:00.666    CC test/lvol/esnap/esnap.o
00:38:01.234    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:38:02.610    LINK env_dpdk_post_init
00:38:02.869    CC test/app/histogram_perf/histogram_perf.o
00:38:03.804    LINK histogram_perf
00:38:16.007    CC test/event/app_repeat/app_repeat.o
00:38:16.007    LINK app_repeat
00:38:16.944    LINK esnap
00:38:31.856    CC app/spdk_nvme_identify/identify.o
00:38:35.144    LINK spdk_nvme_identify
00:38:35.144    CC test/env/memory/memory_ut.o
00:38:35.144    CC test/event/scheduler/scheduler.o
00:38:35.402    CC test/app/jsoncat/jsoncat.o
00:38:35.969    LINK scheduler
00:38:36.228    LINK jsoncat
00:38:38.760    LINK memory_ut
00:38:48.737    CC test/env/pci/pci_ut.o
00:38:48.996    CC test/nvme/aer/aer.o
00:38:48.996    LINK pci_ut
00:38:50.376    LINK aer
00:38:50.376    CC test/nvme/reset/reset.o
00:38:51.312    LINK reset
00:39:03.522    CC test/nvme/sgl/sgl.o
00:39:03.522    LINK sgl
00:39:10.092    CC test/app/stub/stub.o
00:39:10.660    LINK stub
00:39:17.227    CC test/nvme/e2edp/nvme_dp.o
00:39:17.817    CC test/nvme/overhead/overhead.o
00:39:18.396    LINK nvme_dp
00:39:19.332    LINK overhead
00:39:27.451    CC test/nvme/err_injection/err_injection.o
00:39:28.020    CC examples/nvme/hello_world/hello_world.o
00:39:28.020    LINK err_injection
00:39:29.398    LINK hello_world
00:39:32.688    CC app/spdk_nvme_discover/discovery_aer.o
00:39:33.256    LINK spdk_nvme_discover
00:39:43.235    CC test/nvme/startup/startup.o
00:39:44.172    LINK startup
00:39:44.740    CC test/nvme/reserve/reserve.o
00:39:46.118    LINK reserve
00:39:54.237    CC test/nvme/simple_copy/simple_copy.o
00:39:54.805    LINK simple_copy
00:40:12.896    CC test/nvme/connect_stress/connect_stress.o
00:40:12.896    LINK connect_stress
00:40:12.896    CC test/nvme/boot_partition/boot_partition.o
00:40:13.156    LINK boot_partition
00:40:13.429    CC test/rpc_client/rpc_client_test.o
00:40:14.378    LINK rpc_client_test
00:40:26.587    CC test/thread/poller_perf/poller_perf.o
00:40:26.587    LINK poller_perf
00:40:27.966    CC app/spdk_top/spdk_top.o
00:40:29.871    CC examples/nvme/reconnect/reconnect.o
00:40:30.808    LINK spdk_top
00:40:30.808    LINK reconnect
00:40:36.083    CC examples/nvme/nvme_manage/nvme_manage.o
00:40:37.020    LINK nvme_manage
00:40:39.554    CC examples/nvme/arbitration/arbitration.o
00:40:39.555    CC examples/nvme/hotplug/hotplug.o
00:40:40.508    LINK arbitration
00:40:40.508    LINK hotplug
00:40:41.885    CC examples/nvme/cmb_copy/cmb_copy.o
00:40:42.821    LINK cmb_copy
00:40:48.091    CC examples/nvme/abort/abort.o
00:40:49.027    LINK abort
00:40:54.298    CC app/vhost/vhost.o
00:40:54.556    LINK vhost
00:40:55.545    CC app/spdk_dd/spdk_dd.o
00:40:55.817    CC test/nvme/compliance/nvme_compliance.o
00:40:56.763    LINK spdk_dd
00:40:56.763    LINK nvme_compliance
00:40:58.666    CC test/thread/lock/spdk_lock.o
00:41:02.855    LINK spdk_lock
00:41:17.735    CC test/nvme/fused_ordering/fused_ordering.o
00:41:18.670    LINK fused_ordering
00:41:28.646    CC test/nvme/doorbell_aers/doorbell_aers.o
00:41:28.905    CC test/nvme/fdp/fdp.o
00:41:28.905    LINK doorbell_aers
00:41:29.842    CC test/nvme/cuse/cuse.o
00:41:30.101    LINK fdp
00:41:30.670    CC app/fio/nvme/fio_plugin.o
00:41:32.575    LINK spdk_nvme
00:41:33.512    LINK cuse
00:41:36.046    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:41:36.984    LINK pmr_persistence
00:41:39.517    CC test/unit/include/spdk/histogram_data.h/histogram_ut.o
00:41:40.454    LINK histogram_ut
00:41:42.387    CC app/fio/bdev/fio_plugin.o
00:41:44.292    LINK spdk_bdev
00:41:44.858    CC test/unit/lib/accel/accel.c/accel_ut.o
00:41:52.976    CC examples/sock/hello_world/hello_sock.o
00:41:52.976    LINK accel_ut
00:41:52.976    LINK hello_sock
00:42:11.064    CC examples/vmd/lsvmd/lsvmd.o
00:42:11.064    CC test/unit/lib/bdev/bdev.c/bdev_ut.o
00:42:11.064    LINK lsvmd
00:42:19.182    CC test/unit/lib/bdev/part.c/part_ut.o
00:42:22.471    CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o
00:42:24.376    LINK blob_bdev_ut
00:42:25.754    LINK bdev_ut
00:42:27.659    LINK part_ut
00:42:37.643    CC test/unit/lib/blob/blob.c/blob_ut.o
00:42:41.853    CC test/unit/lib/blobfs/tree.c/tree_ut.o
00:42:41.853    CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o
00:42:41.853    CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o
00:42:42.421    LINK tree_ut
00:42:42.989    LINK scsi_nvme_ut
00:42:45.521    LINK blobfs_async_ut
00:42:46.088    CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o
00:42:49.372    CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o
00:42:49.631    LINK blobfs_sync_ut
00:42:50.566    LINK gpt_ut
00:42:52.468    LINK blob_ut
00:42:57.733    CC examples/vmd/led/led.o
00:42:57.992    LINK led
00:42:58.928    CC test/unit/lib/dma/dma.c/dma_ut.o
00:43:00.306    CC test/unit/lib/event/app.c/app_ut.o
00:43:00.306    LINK dma_ut
00:43:02.211    LINK app_ut
00:43:04.115    CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o
00:43:04.683    CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o
00:43:05.620    CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o
00:43:06.560    LINK blobfs_bdev_ut
00:43:06.560    LINK vbdev_lvol_ut
00:43:08.466    CC test/unit/lib/event/reactor.c/reactor_ut.o
00:43:09.035    CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o
00:43:09.035    CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o
00:43:09.294    LINK reactor_ut
00:43:09.552    LINK bdev_zone_ut
00:43:09.811    CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:43:10.070    LINK bdev_ut
00:43:11.005    LINK ioat_ut
00:43:12.910    LINK bdev_raid_ut
00:43:12.910    CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:43:13.478    CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o
00:43:13.737    CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o
00:43:14.674    CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:43:15.240    LINK conn_ut
00:43:15.498    LINK vbdev_zone_block_ut
00:43:15.757    LINK init_grp_ut
00:43:17.134    CC test/unit/lib/json/json_parse.c/json_parse_ut.o
00:43:19.669    CC test/unit/lib/json/json_util.c/json_util_ut.o
00:43:20.237    CC test/unit/lib/json/json_write.c/json_write_ut.o
00:43:20.237    LINK json_util_ut
00:43:20.496    LINK json_parse_ut
00:43:20.755    LINK bdev_nvme_ut
00:43:21.692    CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o
00:43:21.692    LINK json_write_ut
00:43:22.260    LINK bdev_raid_sb_ut
00:43:23.197    CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:43:23.456    CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:43:24.024    LINK jsonrpc_server_ut
00:43:25.929    CC examples/nvmf/nvmf/nvmf.o
00:43:26.188    CC test/unit/lib/bdev/raid/concat.c/concat_ut.o
00:43:26.756    CC test/unit/lib/log/log.c/log_ut.o
00:43:26.756    LINK nvmf
00:43:27.015    LINK iscsi_ut
00:43:27.015    LINK log_ut
00:43:27.015    LINK concat_ut
00:43:27.582    CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:43:30.913    LINK lvol_ut
00:43:30.913    CC test/unit/lib/notify/notify.c/notify_ut.o
00:43:30.913    CC test/unit/lib/iscsi/param.c/param_ut.o
00:43:30.913    CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:43:31.176    LINK notify_ut
00:43:31.744    LINK param_ut
00:43:31.744    LINK portal_grp_ut
00:43:32.681    CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o
00:43:33.618    LINK raid1_ut
00:43:33.618    CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:43:34.556    CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:43:35.491    CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:43:35.491    LINK nvme_ut
00:43:35.750    CC examples/util/zipf/zipf.o
00:43:36.009    CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:43:36.269    LINK zipf
00:43:37.207    LINK tgt_node_ut
00:43:37.466    LINK tcp_ut
00:43:37.466    CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o
00:43:38.035    LINK ctrlr_ut
00:43:39.413    LINK raid5f_ut
00:43:41.948    CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:43:44.505    CC test/unit/lib/scsi/dev.c/dev_ut.o
00:43:45.441    LINK dev_ut
00:43:47.977    LINK nvme_ctrlr_ut
00:43:47.977    CC examples/thread/thread/thread_ex.o
00:43:48.545    CC examples/idxd/perf/perf.o
00:43:48.545    CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:43:48.804    LINK thread
00:43:49.372    LINK idxd_perf
00:43:50.309    CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:43:50.878    CC test/unit/lib/scsi/lun.c/lun_ut.o
00:43:51.136    LINK subsystem_ut
00:43:52.073    LINK lun_ut
00:43:52.331    LINK ctrlr_discovery_ut
00:43:52.899    CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:43:53.864    LINK scsi_ut
00:43:53.864    CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:43:56.413    CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:43:56.672    LINK nvme_ctrlr_cmd_ut
00:43:58.051    LINK nvme_ctrlr_ocssd_cmd_ut
00:43:58.619    CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:43:58.619    CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:44:00.525    LINK nvme_ns_ut
00:44:00.525    LINK scsi_bdev_ut
00:44:00.525    CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:44:00.525    CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:44:01.461    LINK ctrlr_bdev_ut
00:44:02.029    LINK nvmf_ut
00:44:02.967    CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:44:03.536    CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:44:04.104    CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:44:06.008    LINK rdma_ut
00:44:06.008    LINK transport_ut
00:44:06.267    CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:44:06.836    LINK nvme_ns_cmd_ut
00:44:07.095    LINK scsi_pr_ut
00:44:07.354    CC test/unit/lib/sock/sock.c/sock_ut.o
00:44:07.922    CC examples/interrupt_tgt/interrupt_tgt.o
00:44:08.491    LINK interrupt_tgt
00:44:10.396    LINK sock_ut
00:44:13.685    CC test/unit/lib/thread/thread.c/thread_ut.o
00:44:14.253    CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:44:16.788    LINK thread_ut
00:44:16.788    CC test/unit/lib/sock/posix.c/posix_ut.o
00:44:17.047    LINK nvme_ns_ocssd_cmd_ut
00:44:18.953    LINK posix_ut
00:44:18.953    CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:44:19.545    CC test/unit/lib/util/base64.c/base64_ut.o
00:44:20.507    LINK base64_ut
00:44:22.411    LINK nvme_pcie_ut
00:44:22.411    CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:44:22.411    CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:44:23.348    LINK bit_array_ut
00:44:23.608    LINK iobuf_ut
00:44:24.176    CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:44:25.112    LINK pci_event_ut
00:44:26.490    CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:44:27.057    LINK cpuset_ut
00:44:27.316    CC test/unit/lib/util/crc16.c/crc16_ut.o
00:44:27.884    LINK crc16_ut
00:44:28.452    CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:44:29.389    LINK subsystem_ut
00:44:30.327    CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:44:30.327    CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:44:30.586    CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:44:30.586    LINK crc32_ieee_ut
00:44:31.153    CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:44:31.719    LINK rpc_ut
00:44:31.978    LINK idxd_user_ut
00:44:32.546    CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:44:32.805    LINK nvme_poll_group_ut
00:44:33.064    LINK crc32c_ut
00:44:34.974    CC test/unit/lib/util/crc64.c/crc64_ut.o
00:44:34.974    CC test/unit/lib/util/dif.c/dif_ut.o
00:44:34.974    LINK crc64_ut
00:44:35.543    CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:44:35.543    CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:44:36.920    LINK idxd_ut
00:44:36.920    CC test/unit/lib/rdma/common.c/common_ut.o
00:44:37.179    LINK dif_ut
00:44:37.438    CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:44:37.697    LINK common_ut
00:44:39.076    LINK vhost_ut
00:44:39.645    LINK nvme_qpair_ut
00:44:40.213    CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:44:40.781    CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:44:41.040    LINK ftl_l2p_ut
00:44:42.490    CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:44:42.749    LINK ftl_band_ut
00:44:43.008    CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:44:43.268    CC test/unit/lib/util/iov.c/iov_ut.o
00:44:43.268    LINK ftl_io_ut
00:44:43.836    LINK iov_ut
00:44:44.095    CC test/unit/lib/util/math.c/math_ut.o
00:44:44.354    LINK nvme_quirks_ut
00:44:44.613    LINK math_ut
00:44:44.872    CC test/unit/lib/util/pipe.c/pipe_ut.o
00:44:45.809    LINK pipe_ut
00:44:45.809    CC test/unit/lib/util/string.c/string_ut.o
00:44:46.378    CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:44:46.378    CC test/unit/lib/util/xor.c/xor_ut.o
00:44:46.378    LINK string_ut
00:44:46.637    LINK xor_ut
00:44:46.637    CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:44:47.206    CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:44:47.206    LINK ftl_bitmap_ut
00:44:47.206    CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:44:47.465    LINK nvme_transport_ut
00:44:47.465    LINK ftl_mempool_ut
00:44:47.465    CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:44:47.465    CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:44:47.725    LINK nvme_tcp_ut
00:44:47.725    CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:44:47.984    LINK ftl_mngt_ut
00:44:47.984    CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:44:48.243    CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:44:48.243    LINK ftl_sb_ut
00:44:48.243    CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:44:48.243    LINK ftl_layout_upgrade_ut
00:44:48.502    CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:44:48.762    CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:44:48.762    LINK nvme_fabric_ut
00:44:49.021    LINK nvme_io_msg_ut
00:44:49.021    LINK nvme_pcie_common_ut
00:44:49.021    LINK nvme_opal_ut
00:44:49.958    CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:44:49.958    LINK nvme_rdma_ut
00:44:50.547    LINK nvme_cuse_ut
00:45:29.265  json_parse_ut.c: In function ‘test_parse_nesting’:
00:45:29.265  json_parse_ut.c:616:1: note: variable tracking size limit exceeded with ‘-fvar-tracking-assignments’, retrying without
00:45:29.265    616 | test_parse_nesting(void)
00:45:29.265        | ^
00:45:29.265   00:18:59	-- spdk/autopackage.sh@44 -- $ make -j10 clean
00:45:29.524  make[1]: Nothing to be done for 'clean'.
00:45:32.813   00:19:03	-- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:45:32.813   00:19:03	-- common/autotest_common.sh@728 -- $ xtrace_disable
00:45:32.813   00:19:03	-- common/autotest_common.sh@10 -- $ set +x
00:45:32.813   00:19:03	-- spdk/autopackage.sh@48 -- $ timing_finish
00:45:32.813   00:19:03	-- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:45:32.813   00:19:03	-- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:45:32.813   00:19:03	-- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:45:32.813  + [[ -n 2095 ]]
00:45:32.813  + sudo kill 2095
00:45:32.822  [Pipeline] }
00:45:32.839  [Pipeline] // timeout
00:45:32.845  [Pipeline] }
00:45:32.859  [Pipeline] // stage
00:45:32.865  [Pipeline] }
00:45:32.879  [Pipeline] // catchError
00:45:32.888  [Pipeline] stage
00:45:32.893  [Pipeline] { (Stop VM)
00:45:32.906  [Pipeline] sh
00:45:33.186  + vagrant halt
00:45:36.473  ==> default: Halting domain...
00:45:46.519  [Pipeline] sh
00:45:46.800  + vagrant destroy -f
00:45:50.088  ==> default: Removing domain...
00:45:50.101  [Pipeline] sh
00:45:50.386  + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output
00:45:50.422  [Pipeline] }
00:45:50.455  [Pipeline] // stage
00:45:50.459  [Pipeline] }
00:45:50.467  [Pipeline] // dir
00:45:50.470  [Pipeline] }
00:45:50.479  [Pipeline] // wrap
00:45:50.483  [Pipeline] }
00:45:50.490  [Pipeline] // catchError
00:45:50.495  [Pipeline] stage
00:45:50.497  [Pipeline] { (Epilogue)
00:45:50.504  [Pipeline] sh
00:45:50.781  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:46:05.677  [Pipeline] catchError
00:46:05.679  [Pipeline] {
00:46:05.693  [Pipeline] sh
00:46:05.976  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:46:06.235  Artifacts sizes are good
00:46:06.244  [Pipeline] }
00:46:06.258  [Pipeline] // catchError
00:46:06.269  [Pipeline] archiveArtifacts
00:46:06.277  Archiving artifacts
00:46:06.526  [Pipeline] cleanWs
00:46:06.542  [WS-CLEANUP] Deleting project workspace...
00:46:06.542  [WS-CLEANUP] Deferred wipeout is used...
00:46:06.570  [WS-CLEANUP] done
00:46:06.572  [Pipeline] }
00:46:06.588  [Pipeline] // stage
00:46:06.593  [Pipeline] }
00:46:06.607  [Pipeline] // node
00:46:06.612  [Pipeline] End of Pipeline
00:46:06.677  Finished: SUCCESS